EVALUATION USE IN NON-GOVERNMENTAL ORGANIZATIONS
Unlocking the “Do – Learn – Plan” Continuum
A Thesis Presented to the Faculty
Of The Fletcher School of Law and Diplomacy
By
LAKSHMI KARAN
In partial fulfillment of the requirements for the Degree of Doctor of Philosophy
APRIL 2009
Dissertation Committee
KAREN JACOBSEN, Co-Chair
JOHN HAMMOCK, Co-Chair
ADIL NAJAM
UMI Number: 3359808
Copyright 2009 by Karan, Lakshmi
All rights reserved
Lakshmi Karan
Mountain View, CA [email protected]
Highly creative and experienced leader with over 15 years of strategy and management experience in the non-profit and high-tech sectors.
UNIQUE SKILLS
Understand Effectiveness: A deep understanding of how organizations can sustainably scale towards higher impact.
Develop Strategies: Working with leadership and program teams, develop strategies to increase effectiveness and efficiency. Translate strategies into workable action plans.
Drive Impact: Define key success indicators and mobilize the team to focus on these indicators to achieve impact. Work with stakeholders to create shared-understanding. Measure results.
Deliver Results: Leverage opportunities and strengths to maximize impact through effective tools/processes and focused execution. Inspire a learning culture.
PROFESSIONAL EXPERIENCE
The Skoll Foundation, 2007–present
Director, Impact Assessment, Learning and Utilization
• Developed and led processes that review grant portfolio performance at mid-point and for follow-on investment.
• Built systems to track metrics and guide staff learning to inform program decision-making.
• Implemented process efficiencies in the selection of Skoll Awardees.
• Prepared and presented investment recommendations and learning to the Board.
• Programmed the annual convening of Awardees at the Skoll World Forum at Oxford.
Alchemy Project, Boston, MA, 2002–2004
Program Manager
• Developed the criteria for selection of refugee livelihood programs in Africa and disbursed over $200,000 in grants.
• Initiated and created an analytical model using SPSS to measure and assess program performance. This model enabled the project to report statistically on its achievements and also track its long-term impact.
• Prepared annual reports for donors and funding proposals.
• Managed a team of 10 field researchers, including contract negotiations, budget allocation, and logistical and technical support.
Reebok Human Rights Program, Canton, MA, 2001–2002
Management Consultant
• Monitored compliance of Reebok supplier factories worldwide with worker-standards guidelines.
• Commissioned audits; designed and implemented corrective action steps.
Cap Gemini, Boston, MA, 1994–1998
Senior Consultant
• Led a team that designed, developed and implemented a call-center tracking system for a healthcare management center. This system reduced customer service response delays by 30% and reduced client staffing costs by 10%.
• Migrated mainframe-based human resources system to PeopleSoft, for an insurance firm. This required careful process mapping, rationalization of data conversion options and creation of testing modules.
• Collaborated with the sales team to develop several client proposals, expanding Cap Gemini’s opportunities and bringing in over $1 million in additional revenue.
• Mentored junior consultants and summer interns.
EDUCATION
The Fletcher School of Law and Diplomacy, Tufts University, Medford, MA
Ph.D. in International Relations (2009)
Fields of Study: Organizational Learning; Nonprofit Management and Evaluation
Thesis: Evaluation Use in Non-Governmental Organizations: Unlocking the “Do – Learn – Plan” Continuum
Master of Arts in Law and Diplomacy (2000), Slawson Fellow
Fields of Study: Human Rights, Humanitarian Studies and Conflict Resolution
Thesis: Combating Human-Trafficking: A Case Study of Nepal
National Institute of Information Technology, Madras, India
Graduate Diploma in Systems Management (1992), Excellent Honors
Madras University, Madras, India
Bachelor of Science, Mathematics (1990)
OTHER
Volunteer, MAITRI, a support organization for domestic violence victims.
Board Member, Inspire America Media.
ABSTRACT
This dissertation explored the factors that influence evaluation use and the challenges non-governmental organizations (NGOs) face in adapting learning practices and systems that enable use. While much theoretical work has been done on evaluation use and learning in general, guidance on how NGOs can build systems and practices to promote use has been missing. The research addressed this gap: it developed a utility model that identifies the key factors that influence use and the practical steps NGOs can take to implement the model.
To answer these questions, the research reviewed the theoretical models, within evaluation and organizational theory, that promote use, and conducted a survey to understand the current state of use within the NGO sector and the systems that provide an effective link between doing evaluations, knowing the results and learning from them.
The final evaluation utility model presents a fundamental shift in how NGOs must approach program evaluation. It challenges conventional thinking in the NGO sector by arguing that it is no longer sufficient to focus on use only at the program level. The utility model revealed that influencing factors must extend to include the larger context of organizational behavior and learning.
ACKNOWLEDGEMENTS
There are many people to whom I owe a great debt of gratitude for assisting me throughout my doctoral studies. While I name only a few here, it must be acknowledged that this accomplishment has been the result of a collective effort of goodwill, support and encouragement from friends and family from around the world. First, I would like to express my sincere appreciation to my Committee, without whom this dissertation would neither have been started nor completed: my Co-Chair Dr. Karen Jacobsen, who over the years provided the gentle nudge that helped me maintain momentum throughout; my Co-Chair Dr. John Hammock, whose commitment and confidence were invaluable in the final home stretch; and Dr. Adil Najam, whose critical and insightful feedback helped develop a higher-quality product.
I have many families to thank who helped me along the way: my colleagues at the Skoll Foundation, for their support, encouragement and confidence. To my parents and sister, whose steadfast love and prayers always kept positive energy flowing around me, I offer my eternal love and gratitude. To Doss, my best friend, I am grateful for many things, but singularly for keeping faith when I faltered. Thank you for the sacrifices you have made over these years so I could achieve this dream. Finally, this dissertation would not have been a reality without Sonu, who has been a loving companion these long years and continues to teach me the simple joys of being in the moment and relishing life.
LIST OF TABLES & CHARTS
Table 2.1 Primary programming contexts of organizations participating in the survey
..................................................................................................................................... 23
Table 2.2 - Role of the survey respondents................................................................. 26
Table 2.3 – Experience level of survey respondents................................................... 27
Table 2.4 – Participating organizations along with the number of respondents from
each organization ........................................................................................................ 28
Table 3.1 – Advantages/Disadvantages of Internal and External Evaluations ........... 38
Table 3.2 - A model of Outcomes of Evaluation Influence........................................ 52
Table 3.3 Changes in U.S. International NGO Sector, 1970-94................................. 69
Table 3.4 Growth in Revenue of Northern NGOs Involved in International Relief and
Development ............................................................................................................... 70
Table 3.5 - Statistics on the U.S. Nonprofit sector....................................... 71
Table 3.6 – Outcome Mapping factors that enhance utilization ................................. 89
Table 3.7 – Organizational Learning Definitions ..................................................... 102
Table 4.1 – Intended users grouping......................................................................... 127
Chart 4.1 – Intended users grouping ........................................................................ 127
Table 4.2 – Involvement of potential users in planning an evaluation ..................... 128
Table 4.3 – Importance of involving potential users ................................................ 129
Table 4.4 – Uses of program evaluations.................................................................. 130
Chart 4.2 – Uses of program evaluations................................................................. 131
Table 4.5 – Criteria that impact evaluation use ........................................................ 132
Chart 4.3 – Criteria that impact evaluation use ....................................................... 133
Table 4.6 – Participation in evaluation planning ...................................................... 134
Table 4.7 – Evaluation report interests ..................................................................... 135
Chart 4.4 – Evaluation report interests .................................................................... 135
Table 4.8 – Program evaluation timing..................................................................... 136
Table 4.9 – Evaluation reports expectations ............................................................. 137
Table 4.10 – Evaluation recommendations specificity #1 ........................................ 138
Table 4.11 – Evaluation recommendations specificity #2 ........................................ 138
Table 4.12 – Evaluation follow-up ........................................................................... 139
Table 4.13 – Decision-making models ..................................................................... 140
Table 4.14 – Drivers of program change .................................................................. 141
Chart 4.5 – Drivers of program change ................................................................... 142
Table 4.15 – Prevalence of evaluation use process................................................... 143
Table 5.1 – Mapping practical steps to the factors that influence evaluation use .... 163
LIST OF FIGURES
Figure 3.1 Kirkhart’s integrated theory of influence ................................................... 49
Figure 3.2 Evaluation Use Relationships.................................................................... 55
Figure 3.3 Campbell’s implicit process-model........................................................... 57
Figure 3.4 Scriven’s summative model ...................................................................... 58
Figure 3.5 Weiss’s implicit decision model................................................................ 59
Figure 3.6 Wholey’s resource-dependent model ........................................................ 59
Figure 3.7 Cronbach’s process model......................................................................... 60
Figure 3.8 Rossi’s process model ............................................................................... 60
Figure 3.9 Green’s participatory evaluation process .................................................. 61
Figure 3.10 Cousins and Leithwood utilization model............................................... 62
Figure 3.11 Alkin’s factor model................................................................................ 63
Figure 3.12 Patton’s utilization-focused evaluation framework................................. 64
Figure 3.13 Evaluations filed in ALNAP Evaluative Reports Database .................... 75
Figure 3.14 The Research and Policy in Development Framework ............................ 84
Figure 3.15 Outcome Mapping Framework................................................................ 88
Figure 4.1 Tools to keep evaluation findings current in organization memory........ 144
Figure 4.2 Processes that can increase use................................................................ 146
Figure 4.3 Reasons why evaluations get referred or not........................................... 148
Figure 5.1: The Utility Model................................................................................... 150
Figure 5.2 – Evaluation use and decision-making groups ........................................ 154
Figure 5.3 – Practical Steps at the Planning and Execution Phase ........................... 164
Figure 5.4 – Practical Steps at the Follow-up Phase................................................. 169
TABLE OF CONTENTS
Chapter 1: Introduction ....................................................................................................... 1
Research Context ............................................................................................................ 1
The Problem: the under-utilization of evaluation in NGOs............................................ 3
Purpose of this Research................................................................................................. 5
Methodology................................................................................................................... 7
Theories that Frame Research......................................................................................... 9
Research Findings and Conclusion............................................................................... 12
Dissertation Organization ............................................................................................. 13
Chapter 2: Methodology ................................................................................................... 14
Proposition and Research Questions............................................................................. 14
Research Structure ........................................................................................................ 16
Stage 1: Theoretical Review ..................................................................................... 18
Stage 2: Survey ......................................................................................................... 22
Chapter 3: Literature review ............................................................................................. 34
Evaluation Utilization ................................................................................................... 34
Definitions................................................................................................................. 34
1960s through 1970s: The Foundation Years ........................................................... 41
1980s through 1990s: The rise of context in evaluation theory................................ 43
The 21st Century: Stretching the boundaries beyond use.......................................... 48
Process Models of Evaluation Use ........................................................................... 57
Program Evaluation Systems in NGOs......................................................................... 65
Definitions................................................................................................................. 65
Growth of the NGO Sector ....................................................................................... 69
Current Use of Evaluations in NGOs........................................................................ 74
Barriers to Evaluation Use in NGOs......................................................................... 95
Organizational Learning ............................................................................................. 102
Definitions............................................................................................................... 102
Types of Learning ................................................................................................... 104
Levels of Learning .................................................................................................. 106
Leading Theorists.................................................................................................... 108
Main Constructs ...................................................................................................... 114
Evaluation Use and Organization Learning............................................................ 122
Chapter 4: Presentation of Survey Results...................................................................... 126
Stage 1: Evaluation Planning...................................................................................... 126
Stage 2: Evaluation Implementation........................................................................... 133
Stage 3: Evaluation Follow-Up................................................................................... 139
Chapter 5: The Utility Model.......................................................................................... 149
Explanation of Model ................................................................................................. 150
Steps to Implement the Model .................................................................................... 161
Practical Steps at the Program Level ...................................................................... 164
Practical Steps at the Organization Level ............................................................... 171
Chapter 6: Conclusion..................................................................................................... 176
Recommendations for Future Research...................................................................... 179
REFERENCE LIST ........................................................................................................ 181
Appendix A – Evaluation Use in Non-Governmental Organizations Survey ................ 190
Appendix B – Master List of US Based NGOs with an International Focus ................. 198
Appendix C – Survey Population ................................................................................... 212
Chapter 1: Introduction
Research Context
Over the last two decades there has been a dramatic growth in the number of
non-governmental organizations (NGOs) involved in development and humanitarian
aid, in both the developed and developing countries. The total amount of public funds
being channeled through NGOs has also grown significantly and the proportion of aid
going through NGOs, relative to bilateral or multilateral agencies, has also increased.
European Union funding for international NGOs grew from a budget of USD 3.2 million in the mid-1970s to an estimated USD 1 billion by 1995, accounting for somewhere between 15 and 20 percent of all EU foreign aid1. In 2006, the EU budget for
the non-profit sector as a whole was close to €55 billion2. Strengthened by these enormous funding commitments, NGOs grew in number worldwide and began to establish themselves as experts in all aspects of development and humanitarian issues.
Associated with this growth has been an increasing concern about the efficiency of NGO policies and practices3. These debates were greatly influenced by
the changing donor environment, whose emphasis on quality management resulted in
several NGOs adopting processes that contribute to increased transparency,
1 Jerker Carlsson, Gunnar Kohlin, and Anders Ekbom, The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid (New York: St. Martin's Press, 1994).
2 http://www.idseurope.org/en/budget2006.en.pdf and http://ec.europa.eu/budget/index_en.htm
3 Harry P. Hatry and Linda M. Lampkin, "An Agenda for Action: Outcome Management for Nonprofit Organizations," (Washington DC: The Urban Institute, 2001).
monitoring and evaluation and to an extent, organizational accountability.
Some of the processes that evolved to address these concerns include the
development of codes of conduct, benchmarks and standards that enhance
operations4. NGOs set up partnership networks in their different fields to share and
learn from common experiences. An example of such a network in the US is
InterAction, a coalition of more than 175 humanitarian organizations working on
disaster relief, refugee-assistance, and sustainable development programs worldwide.
While such partnerships provided large amounts of information, their impact was all
but lost as organizations struggled to assimilate this knowledge.
The increasing demand on NGOs to provide more services, combined with greater competition for funds, has challenged organizations to find ways to become more effective and deliver greater social and economic impact.
According to Margaret Plantz et al, the nonprofit sector has been measuring certain
aspects of performance for several decades – these include financial accountability,
inputs, cost, program products or outputs, adherence to quality in service delivery and
client satisfaction5. The authors suggest that while these measures yield critical
information about the services the nonprofits are providing, they seldom reveal
whether the NGO’s efforts made a difference. In other words, was anyone better off
as a result of the service from the NGO? Consequently, they urged NGOs to engage in effective planning and management. This requires systematic assessments
4 Koenraad Van Brabant, "Organizational and Institutional Learning in the Humanitarian Sector: Opening the Dialogue," (London: Overseas Development Institute, 1997).
5 Margaret C. Plantz, Martha Taylor Greenway, and Michael Hendricks, "Outcome Measurement: Showing Results in the Nonprofit Sector," New Directions for Program Evaluation, no. 75 (1997).
of past activities and their results and utilizing the learning for informed decision-making. Strengthening organizational capacity for evaluation and learning systems continues to be a growing concern. Paul Light notes that today NGOs have to make
strategic allocation of resources to learning6. He states that much of the “lean and
mean” rhetoric that preoccupied private firms and government agencies during the
1980s and early 1990s has now filtered over to the nonprofit sector. While NGOs
devote more time to service delivery than program evaluation, even less is devoted to
learning from these evaluations.
The Problem: the under-utilization of evaluation in NGOs
Within the NGO sector it was only in the late 1980s, under increasing pressure from donor agencies, that there began an earnest attempt to examine the quality of
evaluation utilization7. Given that billions of dollars have been spent by NGOs over
the last decade on projects and millions spent on their evaluations, why has it been so
difficult to setup a process of critical reflection and learn from their experience? With
increasing competition for funding and growing societal problems, how does one
distinguish effective from ineffective, efficient from inefficient programs? How can
organizations avoid expending precious resources on an evaluation to produce reports
that gather dust on bookshelves, unread and more importantly unused?
6 Paul C. Light, Making Nonprofits Work: A Report on the Tides of Nonprofit Management Reform (Washington, DC: The Aspen Institute / Brookings Institution Press, 2000).
7 The terms "use" and "utilization" are applied interchangeably.
Several reasons emerge as to why agencies don’t maximize the use of
evaluation findings. They range from inept and badly conducted evaluations to deliberate attempts by organizational decision-makers to ignore findings and recommendations that may undercut their program plans8. NGO evaluations were
found to have inadequate information to support decision-making. Key deficiencies identified include weak methodological set-up of evaluations, poor data collection methods, and limited attention to cross-cutting issues and broader lessons learned. Moreover, in the absence of formal, structured follow-up procedures when
the evaluation report is completed, it falls into the organizational abyss: low priority,
neglect and indifference among the potential users. Evaluations often are viewed as
an onerous box to check rather than an opportunity to inform program decision-
making. An Organization for Economic Cooperation and Development (OECD)
commissioned report comparing developmental NGOs’ evaluations concluded that
most evaluations lacked follow-up because they were commissioned by donors
without the participation of NGO staff9. These evaluation results were more geared
towards decisions on funding rather than critical assessments of the programs.
While the literature on program evaluation has made significant advances in identifying the factors that influence use, very little of this work has crossed over into NGO practice. The primary reason is that NGOs do not have a simple, practical framework that guides them towards increasing utilization. Moreover, often facing a scarcity of resources, in funds and
8 R. C. Riddell et al., "Searching for Impact and Methods: NGO Evaluation Synthesis Study," (OECD/DAC Expert Group, 1997).
9 Ibid.
personnel, NGOs are overwhelmed by the lists of factors that have to be addressed
and the complexity of processes that need to be established to maximize evaluation
use10. Recent research calls for a way to make evaluation utilization simpler and more scalable: a streamlined framework for use that builds on NGO strengths and mitigates their constraints.
Purpose of this Research
The purpose of this research was to develop a practical model that enables NGOs to
maximize evaluation use. The study examined the factors that influence evaluation
use and explored the challenges NGOs face in adapting learning practices and
systems that enable use. While much theoretical work has been done on evaluation use and learning in general, guidance on how NGOs can build systems and practices to promote use has been missing. The research addressed this gap: it developed a utility
model that identifies the key factors that influence use and the practical steps NGOs
can take to implement the model. To answer these questions, the research reviewed the theoretical models, within evaluation and organizational theory, that promote use;
conducted a survey to understand the current state of use within the NGO sector and
the systems that provide an effective link between doing evaluations, knowing the
results and learning from them. Within this frame it explored the following sequence
of questions:
10 Vic Murray, "The State of Evaluation Tools and Systems for Nonprofit Organizations," New Directions for Philanthropic Fundraising, no. 31 (2001).
(1) What is evaluation use?
(2) What are the factors that influence evaluation use?
(3) How are these factors applied within the NGO sector?
(4) What are the challenges in promoting use in NGOs?
(5) What are the processes and systems that can increase evaluation utilization in
NGOs?
Methodology
In order to understand the motivators and inhibitors of evaluation utilization, this
research began with a review of the literature to discover what others have suggested
might be factors that influence use. Chapter 3 elaborates on the theories and works of
these authors. Methods included the review of existing documentation and survey of
NGO staff - including program members and senior management. Primary and
secondary documents included published and unpublished works about evaluation use and organizational learning, and NGO case studies of tracking use, written by scholars, practitioners, evaluation consultants and NGOs.
A survey was conducted in order to gather first-hand, primary evidence of the types
of factors that influence use and to better understand the processes and systems that
promote use. Altogether, 111 respondents from 40 NGOs provided background and
relevant data that contributed significantly to the creation of the utility model. The
purpose of the survey was not only to validate the utilization factors that emerged from the literature but also to identify additional necessary factors, as seen from within the NGO sector, that the literature may have missed.
A single survey structure and content maintained uniformity across respondents. The survey was semi-structured, with several open-ended questions.
After several rounds of communication with potential respondents to explain the
purpose of this research and the survey and gauge their willingness to participate, the
survey was sent electronically. Surveying practitioners served to flesh out the intricacies of use within different types of organizations, the political dimensions of use in decision-making, and the systems respondents identified as essential to promote use.
In pursuing information through documentation and survey, the author studied the
general utilization ecosystem in NGOs at the program and organizational levels.
NGOs were chosen for the survey through purposive sampling. This research targeted
US-based NGOs with an international programmatic focus. Within the domain of
purposive sampling, a combination of Expert and Snowball methods was used. Expert sampling involves assembling a sample of persons with known or
demonstrable experience and expertise in some area. Sampling for the survey first
targeted staff in NGOs who are program experts and have a close knowledge of
evaluations. Snowball methods were then used to expand the participant list within NGOs: first respondents were asked to recommend others they knew who also met the criteria. Although this method does not produce representative samples, it was useful for reaching populations that could provide multiple perspectives from within the same organization. Details on the sampling are provided
in the Methodology chapter.
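The expert-plus-snowball expansion described above can be sketched as a simple iterative procedure: start from a seed list of known experts, ask each participant for recommendations, and add any new qualifying names until no new recommendations arrive. The sketch below is purely illustrative and not part of the dissertation's instruments; the names `seed_experts` and `recommend` are hypothetical stand-ins for the survey's actual recruitment steps.

```python
def snowball_sample(seed_experts, recommend, max_rounds=3):
    """Expand a participant list from expert seeds via peer recommendation.

    seed_experts: initial list of program staff with evaluation expertise
    recommend:    function mapping one participant to the people they suggest
    max_rounds:   cap on recommendation rounds, since snowballing can grow fast
    """
    sampled = set(seed_experts)        # everyone invited so far
    frontier = list(seed_experts)      # people not yet asked for referrals
    for _ in range(max_rounds):
        next_frontier = []
        for person in frontier:
            for peer in recommend(person):
                if peer not in sampled:        # avoid duplicate invitations
                    sampled.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
        if not frontier:                       # no new referrals: stop early
            break
    return sampled
```

With a toy recommendation map `{"a": ["b", "c"], "b": ["c", "d"]}` and seed `["a"]`, the procedure returns the set `{"a", "b", "c", "d"}`, illustrating how a single expert seed can yield multiple respondents from the same network.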
Theories that Frame Research
Evaluation is an elastic word that stretches to cover judgments of many
kinds11. While there can be many categories and dimensions of evaluation, what they all share is the notion of judging merit: simply put, weighing an event against some explicit or implicit yardstick. Since the 1960s, when the practice of evaluation emerged with academic rigor, there has been a systematic push to mold and shape its content. This effort successfully delivered different approaches to evaluation, structures of data collection and guidelines for practice. However, what lagged behind was an understanding of how best to use the findings of evaluations.
While practitioners focused on the mechanics of conducting a good evaluation, they assumed the results would automatically affect decisions. Why would an organization spend
time and resources to conduct an evaluation if it didn’t intend to use the results? One
can argue that if a comprehensive evaluation was done and the report presented in a
clear manner the results will be used for program decision-making. Unfortunately,
this is not what happens in reality.
It wasn’t until the late 1970s that evaluation theorists found many factors that
intervened between the completion of an evaluation study and its application to
practice12. Michael Quinn Patton’s theoretical framework of evaluation-utilization
identified patterns and regularities that provide a better understanding on where, by
11 Carol H. Weiss, Evaluation Research: Methods for Assessing Program Effectiveness (New Jersey: Prentice-Hall, 1972).
12 Leonard Rutman, Evaluation Research Methods: A Basic Guide, 2d ed. (Beverly Hills, CA: Sage Publications, 1984).
whom, and under what conditions evaluation results are most likely to be applied.
Carol Weiss concluded that among the problems that plagued evaluation utilization
were inadequate preparation; practitioner suspicion and resistance; limited access to
data; limited time for follow-up; and inadequacies of money and staffing. Despite
differences in emphasis and approach to useful evaluations, the common theme
across this research rests on the premise that the primary purpose of evaluations is
their contribution to the rationalization of decision-making13.
While evaluation theorists were struggling to understand utilization, the NGO
management theorists were fighting their own battles with efficiency and
organizational performance. There has been a steady stream of experimentation with
specific methods, especially those focusing on participatory approaches to M&E and
impact assessment. A number of NGOs produced their own guides on monitoring and
evaluation. Recent books on NGO management are giving specific attention to
assessing performance and the management of information14. As well as doing their
own evaluations, some NGOs are now doing meta-evaluations (of methods) and
syntheses (of results) of their evaluations to date15. Similar but larger scale studies
have been commissioned by bilateral funding agencies. All of these efforts have
attempted to develop a wider perspective on NGO effectiveness, looking beyond
individual projects, across sectors and country programs. Overall, NGOs have
become much more aware of the need for evaluation, compared to the 1980s when
13 M. C. Alkin et al., "Evaluation and Decision Making: The Title VII Experience," in CSE Monograph No. 4 (Los Angeles: UCLA Center for the Study of Evaluation, 1974).
14 Vandana Desai and Robert Potter, The Companion to Development Studies (London: Arnold, 2002).
15 A. Fowler, Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development (London: Earthscan, 1997).
there was some outright hostility16. However, there is still a struggle over how best to
structure processes within organizations to increase utilization.
Organizational Learning (OL) literature can provide evaluation use researchers
with a helpful framework for understanding and creating cultural and structural change
and for promoting long-term adaptation and learning in complex organizations operating
in dynamic environments. Constructs within the OL literature provide several links
between evaluation utilization practices and learning in organizations. OL focuses on
fostering learning by integrating utilization processes into the everyday practices,
leadership, communication and culture of the organization – staff becoming involved
in the evaluation process and increasing their interest and ability in exploring critical
issues using evaluation logic.
Drawing from these theories, this study explored how NGOs can increase the
utilization of evaluation findings to affect program and organization effectiveness.
16 M. Howes, "Linking Paradigms and Practise: Key Issues in the Appraisal, Monitoring and Evaluation of British NGO Projects," Journal of International Development 4, no. 4 (1992).
Research Findings and Conclusion
This thesis developed an evaluation utility model that NGOs can implement to
increase use. It addresses a key gap that has existed in the field and has moved the
dialogue on evaluation utilization forward by identifying the key factors that
influence use and by providing a practical framework that highlights the inter-
relatedness of these factors.
The model presents a fundamental shift in how NGOs must approach program
evaluation. It challenges the conventional thinking in the NGO sector with the notion
that it is no longer sufficient to focus on use only at the program level. The utility
model revealed that influencing factors must extend to include the larger context of
organizational behavior and learning. This is a significant contribution to the current
understanding and derives strongly from the survey of practitioners. Specifically, the
primary research highlighted that evaluation use is a multi-dimensional phenomenon
that is interdependent with human, evaluation and organizational factors. Within this
context, the utilization process is not a static, linear process – but one that is dynamic,
open and multi-dimensional – driven by relevance, quality and rigor. The model
attempts to capture this environment focused on the central premise that whether an
evaluation is formative or summative, internal or external, scientific, qualitative or
participatory the primary reason for conducting evaluations is to increase the
rationality of decision-making. The model challenges NGOs to make evaluation
utilization an essential function of its operations and offers practical steps on how
organizations can operationalize this. The model adds to the knowledge of evaluation
use in NGOs by expanding its focus from being restricted to the program level to
include the larger realities at the organization level.
Dissertation Organization
The six chapters of the dissertation are organized as follows. Chapter 1, the
Introduction, has briefly described the context of the study. Chapter 2 outlines the
methodology used to answer the research questions in this study; the chapter
discusses the rationale behind the survey and data collection methods, followed by
the research questions examined. The detailed review of the literature is presented in
Chapter 3, which examines the theories and models around evaluation utilization,
organizational learning and NGO evaluation practice. Chapter 4 presents the findings
of the survey of NGOs. Chapter 5 presents the utility model and draws together the
summary findings of this research; this forms the core of the analysis, explaining how
the theories reviewed and the survey results respond to the research questions.
Chapter 6, the Conclusion, provides an interpretation of the research findings along
with suggestions for future research.
Chapter 2: Methodology
This chapter explains the methodology that was used to study evaluation
utilization in NGOs. It begins with a discussion of the research questions that were
explored in the study. Second, it describes the research structure – theories explored
and data collection strategies used. Third, it describes how the data were analyzed.
Fourth, it addresses limitations of the methodology.
Proposition and Research Questions
Research Proposition: In NGOs, successful utilization results when the principles of
use are embedded throughout the lifecycle of an evaluation – planning,
implementation and follow-up.
Research Questions
What is Evaluation Use?
This research began by exploring the concept of evaluation use. What are the
theoretical origins of use? How has it evolved? How is it measured? When do we
know use has occurred? Answering these questions provided the foundation for
understanding what the mechanisms are to achieve and maximize use.
What are the factors that influence evaluation use?
Every phenomenon has factors that trigger its behavior. While seeking to understand
the processes to increase evaluation utilization, this study examined the push and pull
factors that influence use. What are these factors? Is there a pattern in how they are
manifested? What is the relationship among them?
How are these factors applied within the NGO sector? What are the challenges in
promoting use in NGOs?
As this study focused on the NGO sector, it was important to understand how the
factors of use are currently operationalized. Why do certain factors result in use while
others do not? How are NGOs tracking use? What are the barriers to use? What
attempts have been made to overcome these barriers?
What are the processes and systems that can increase evaluation utilization in
NGOs?
This is the final question this research sought to answer. If there are certain factors
that help to maximize use, then how can they be triggered to achieve results? What
must an NGO do to build and/or strengthen these triggers? What are the challenges in
implementing such processes and systems? How can they be mitigated?
Research Structure
The research was conducted in three stages:
1. Theoretical review – A review of evaluation theory; NGO program evaluation
practice and organizational learning literature.
2. Survey of NGOs – to gather descriptive information on the extent and type of
evaluation utilization occurring in NGOs; assess the key factors identified by
the literature and understand the systems NGOs employ to promote use.
3. Development of utility model – drawing from the data collected, an evaluation
utility model for NGOs was developed along with a list of practical steps that
can be implemented to increase utilization.
Below is a representation of how the research questions were covered across the
first two stages. As evident from the list below, this research relied to a
large extent on the literature review. However, the survey provided an important
means of validating the factors that influence use and of identifying the processes
that trigger their effectiveness and the barriers that inhibit them.
Questions and Data Collection
• What is evaluation use? – Literature review
• What are the factors that influence evaluation use? – Literature review
• How are these factors applied within the NGO sector? – Survey & Literature review
• What are the challenges in promoting use in NGOs? – Survey & Literature review
• What are the processes and systems that can increase evaluation utilization in NGOs? – Survey & Literature review
Stage 1: Theoretical Review
This phase involved an in-depth examination of the different utility models to
identify correlations between evaluation theory and NGO practice and develop a
systems understanding of evaluation use. Books and journals contributed to nearly
90% of the theoretical review. The rest was supplemented by online references. Key
journals included:
• American Journal of Evaluation
• Evaluation Practice
• Evaluation and Program Planning
• Journal of Management Studies
• Nonprofit Management and Leadership
• Nonprofit and Voluntary Sector Quarterly
First, a study of evaluation theory was undertaken. The University of
Minnesota archives (Wilson Library in Humanities and Social Science) provided a
valuable microfiche of literature dating back to the 1970s. This formed the basis for
further exploration to build a comprehensive bibliography. While the central books
that defined evaluation theory were easy to obtain, it was a challenge to track down
some key and relevant articles published in conferences and journals that are now
discontinued. These were subsequently obtained from the online databases of the
Evaluation Center at Western Michigan University17 and the American Journal of
Evaluation18. The theory of evaluation use is presented as a historical review rather
than a thematic one because the concept of evaluation utilization had been an underlying
theme from the early years and only emerged as a distinct sub-branch in the late
1990s. This approach also gave a clearer understanding of the challenges and key
revisions that shaped the utilization models as they evolved.
Mining the literature around NGO evaluation practice was a bit more
circuitous as there are not many dedicated researchers in this space. This research
started out in the context of NGO program management, developing an understanding
about program rationale and decision making. In order to understand the challenges
of program evaluation use it was important to first understand what drives program
decision making and the internal dynamics of organizational management. Questions
explored here include how do NGOs decide on programs? What are the
organizational structures that enable effective program management? What are the
models of decision-making? The interest was to explore the extent to which NGOs
incorporate the concept of utilization into their practice and understand the practical
challenges to effective use. To this effect, this study draws on the earlier work done
by networks like InterAction and ALNAP.
InterAction is the largest coalition of U.S.-based international NGOs focused on the
world’s poor and most vulnerable people. Collectively, InterAction’s members work
in every developing country. U.S. contributions to InterAction members total
17 "The Evaluation Center," www.wmich.edu/evalctr/.
18 "The American Journal of Evaluation," aje.sagepub.com.
around $6 billion annually. InterAction’s comparative advantage rests on the uniquely
field and practitioner-based expertise of its members, who assist in compiling data on
the impact of NGO programs as a basis for promoting best practices and for evidence-
based public policy formulation. The Active Learning Network for Accountability
and Performance in Humanitarian Action (ALNAP) was established in 1997,
following the multi-agency evaluation of the response to the Rwanda genocide. It is a collective
response by the humanitarian sector, dedicated to improving humanitarian
performance through increased learning and accountability. ALNAP is a unique
network in that its 60 members include donors, NGOs, UN and academic institutions.
The network’s objective is to improve organization performance through learning and
accountability. ALNAP’s key initiative, the Evaluative Reports Database, was
created to facilitate information sharing and lesson learning amongst humanitarian
organizations. Evaluative Reports are submitted to the database by member
organizations and made available online. Findings from reports in the database are
regularly distilled and circulated to a wider audience through ALNAP’s publications.
Information from these two networks contributed to this study on two fronts:
(a) They provided a strong list of candidates for the survey – NGOs that are interested
in and committed to improvements around evaluation use.
(b) Their research provided a rich background for understanding the challenges NGOs
face in planning and implementing evaluation use.
The final review was in the field of Organizational Learning. The focus of this study
within the vast OL literature was to understand what organizations, in this case
NGOs, can do in a practical, systemic way to increase utilization and learning. While
there is a whole branch of study that revolves around building a learning organization,
this research focused on understanding the organizational and individual
indicators necessary to drive effective evaluation use, the key constructs of OL, and
their linkages with evaluation utilization.
Stage 2: Survey
A survey of NGOs was conducted to understand the extent and type of evaluation
utilization occurring in NGOs as well as to assess the key factors, identified by the
literature, as influencing use. The survey data was collected over the course of a year
(2005). The survey questionnaire contained sections pertaining to evaluation use and
organizational learning within the framework of NGO practice. Appendix A contains
the survey questionnaire.
Data Collection
Characteristics of NGOs Surveyed
NGOs vary in many different ways – in size, type of services provided and
geographic location. This survey targeted organizations that met the following two
criteria:
• an international program focus and a presence in the United States
• a strong program evaluation practice
Table 2.1 depicts the breakdown of the primary programming contexts of the
organizations. As shown below, about one third of the surveyed NGOs were primarily
concerned with economic development, followed by health at 19%, disaster response
at 15%, environment at 13%, and human rights and social development at 10%. At
the bottom of the list, with 5% each, were education and the "other" categories
specified by respondents. All of the "other" responses, however, seem more like
activities or strategies that the organizations use to achieve their objectives rather
than programming contexts. For example, an organization could be using advocacy or
research to work in the context of human rights and social development. Mapping the
responses to the respondents' organizations – civil society (CARE), advocacy
(Conservation International), research-based advocacy (Earth Watch Institute),
research and policy (Physicians for Human Rights) and campaigning (World Wildlife
Fund) – it becomes clear that they could map onto any of the options provided in the
response list.
Table 2.1 Primary programming contexts of organizations participating in the survey
#4: How would you categorize the overall programming of the NGO, which is the context for your responses? (please select only the most appropriate)
Answer Options – Response Percent (Response Count)
Disaster Response / Humanitarian Assistance – 15% (17)
Economic Development – 34% (38)
Environment – 13% (14)
Education – 5% (5)
Human Rights and Social Development – 10% (11)
Health – 19% (21)
Other (please specify) – 5% (5)
Total – 100% (111)
Other (please specify): Civil Society; Advocacy; research-based advocacy; Research and Policy; Campaigning
Selection of NGOs and respondents
NGOs were chosen for the survey through purposive sampling, in which the
sample is selected with a purpose in mind, targeting one or more specific predefined
groups. This research targeted US-based NGOs with an international program focus
that have an active engagement in evaluation improvement. The first step was to
verify that the NGOs met the criteria for being in the sample.
(1) First, a master list of all US-based NGOs that work in the above issue areas and
have an international focus was created from the IRS Exempt Database registry
(resulting in 492 organizations – Appendix B)19.
(2) To identify organizations within this master list that have an active
engagement/interest in evaluation improvement and learning, the list was cross-
referenced with member lists from the ALNAP and InterAction networks to create a
short list of 163 NGOs. (Appendix C)
(3) These 163 NGOs were put in a column in a spreadsheet. A second column of
random numbers was then generated with Excel's random number generator. Sorting
on the second column put the NGO names in random order.
19 "Internal Revenue Service - Charities and Non-Profits (Extract Date October 4, 2005)," http://www.irs.gov/charities/article/0,,id=96136,00.html.
(4) The first 100 NGOs were then contacted via email to request participation in the
survey. Of these, 27 organizations accepted – an acceptance rate of 27%.
(5) To increase the participation rate, the next 50 NGOs on the list were contacted.
Of these, 13 organizations agreed to participate.
(6) This resulted in a final response count of 40 NGOs and 111 respondents.
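Step (3) above – pairing each NGO with a spreadsheet column of random numbers and sorting on it – is a standard way to produce a random ordering. A minimal sketch of the equivalent procedure, using hypothetical placeholder names rather than the actual 163-NGO short list:

```python
import random

# Hypothetical placeholder names standing in for the 163 short-listed NGOs.
shortlist = ["NGO A", "NGO B", "NGO C", "NGO D", "NGO E"]

# Pair each NGO with a random number (the spreadsheet's second column),
# then sort on that number to put the names in random order.
keyed = [(random.random(), name) for name in shortlist]
keyed.sort(key=lambda pair: pair[0])
randomized = [name for _, name in keyed]

# The first N names in the randomized order are contacted first,
# as in steps (4) and (5).
first_batch = randomized[:3]
```

Sorting on a column of uniform random numbers yields an unbiased shuffle, since every ordering of the random keys is equally likely (ties aside).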
Within the domain of purposive sampling – a combination of Expert and Snowball
methods were used to solicit survey respondents. Sampling for the survey first
targeted staff in NGOs who are program experts and have a close knowledge of
evaluations. Targeted personnel within the NGOs were program staff (e.g., Program
Officers) and program senior management (e.g., Program Director, Vice President).
The focus was on those who were directly involved in program evaluation and/or
management and who had been working in the NGO program context for over six
months. The advantage of this approach is that it reaches the individuals who
understand the issue of program evaluation use. The disadvantage is that even
these "experts" can be wrong in their assessments.
Table 2.2 below presents a breakdown of the respondents' professional level within
the organization. Nearly 86% of the respondents were program staff, either as
managers or team members, and 11% identified themselves as part of the senior
management team. From a decision-making lens, assuming that program managers
make decisions about their programs, there is almost 49% representation of decision-
makers in the survey (adding the program managers' 38% and senior management's
11%).
Table 2.2 – Role of the survey respondents
#5: Please select one option that relates closely to your current role.
Answer Options – Response Percent (Response Count)
Program Manager – 38% (42)
Program Team Member – 48% (53)
Senior Management (Director and above) – 11% (12)
Board Member – 0% (0)
Other (please specify) – 3% (3)
Total – 100% (111)
Other (please specify): Operations team – not programs; Advocacy officer; Evaluations manager
Table 2.3 below shows the experience level of respondents with NGO programs. This
helps assess the level of understanding they bring about program evaluations, their
use and the barriers to use. Over half the respondents have a significant number of
years in programming; only 4% identified as having less than a year.
Table 2.3 – Experience level of survey respondents
#3: No. of years experience with NGO programs?
Answer Options – Response Percent (Response Count)
Less than 1 year – 4% (4)
Between 1 – 5 years – 31% (34)
Between 5 – 10 years – 15% (17)
Over 10 years – 50% (56)
Total – 100% (111)
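The response percentages reported in Tables 2.1 – 2.3 appear to be each option's response count divided by the 111 total responses, rounded to the nearest whole percent. A quick check of the Table 2.3 figures under that assumption:

```python
# Response counts from Table 2.3; the total should be the 111 respondents.
counts = {
    "Less than 1 year": 4,
    "Between 1 - 5 years": 34,
    "Between 5 - 10 years": 17,
    "Over 10 years": 56,
}
total = sum(counts.values())  # 111

# Percent of respondents per option, rounded to the nearest whole percent.
percents = {option: round(100 * n / total) for option, n in counts.items()}
# -> {'Less than 1 year': 4, 'Between 1 - 5 years': 31,
#     'Between 5 - 10 years': 15, 'Over 10 years': 50}
```

The rounded figures reproduce the published percentages, which supports the count-over-total reading of the tables.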
Snowball methods were used to expand the participant list within NGOs. Although
the survey was addressed to a specific expert at each organization, often within the
program team, that person was requested to forward the survey to others within the
organization involved with evaluation and program decision-making. While the first
point of contact was pre-determined, the subsequent respondents were not pre-
selected. This approach yielded multiple respondents within several organizations,
contributing to a potentially more diverse understanding of evaluation utilization
practices in a specific context than a single respondent per organization could
provide. However, even in this situation there is a possibility that the respondents
could have collectively provided an unbalanced depiction of evaluation use within the
organization.
Table 2.4 below shows the distribution of the 111 respondents across the 40 NGOs:
4 NGOs had 4 respondents each; 27 NGOs had 3 each; 5 NGOs had 2 each; and 4
NGOs had one respondent each.
Table 2.4 – Participating organizations along with the number of respondents from each organization
ActionAid International (2)
Advocacy Institute (1)
Africare (3)
American Red Cross (3)
American Refugee Committee (3)
CARE International (4)
Catholic Relief Services (3)
CONCERN Worldwide (2)
Conservation International (3)
Doctors without Borders (3)
Earth Watch Institute (3)
Global Fund for Women (3)
Grassroots International (2)
Habitat for Humanity International (3)
Heifer International (3)
Human Rights Watch (3)
Institute for Sustainable Communities (1)
Interaction (3)
International Rescue Committee (3)
IPAS – USA (2)
Jesuit Refugee Service (3)
Mercy Corps (3)
National Committee on American Foreign Policy (1)
Open Society Institute (3)
OXFAM (3)
PACT (4)
Pan American Health Organization (1)
Peace Corps (3)
Physicians for Human Rights (3)
Population Services International (3)
Refugees International (3)
Salvation Army World Service Office (3)
Save the Children (4)
The Lutheran World Federation (3)
Unitarian Universalist Service Committee (3)
Weatherhead Center for International Affairs (2)
Women's Commission for Refugees (3)
World Council of Churches (3)
World Vision (4)
World Wildlife Fund (3)
Process for soliciting participants from NGOs
• An initial email explaining the context for the research was sent.
• Whenever contact information was available, there was a follow-up phone call to
clarify questions and to ensure that the participant was directly involved in
evaluation or program management.
• The link to the online survey was then shared. In several cases, the initial contact
person declined to participate or referred other staff within the organization who
were a better fit for the research.
It took over a year from starting to source participants to when the surveys were all
completed. The main reasons cited by participants for their interest in this study were
as follows:
o They acknowledged the problem of evaluation under-utilization
o They wanted to share their internal systems and approaches
o They wanted to learn what systems they could put in place to improve use
Development and Distribution of the Survey
Among the several online survey tools, SurveyMonkey.com was selected
primarily for its ease of design and robust functionality. Survey questions can be
divided into two broad types: structured and unstructured. Within the structured
format there are (1) dichotomous questions – Yes/No; Agree/Disagree responses and
(2) questions based on a level of measurement/ranking. Respondents were also
allowed to comment on most questions to capture options that may have been
overlooked. Unstructured questions are open-ended to gather respondent perspectives
on specific issues.
To ensure content validity and technical functionality, the survey was pretested
with 4 organizations – CONCERN Worldwide, Human Rights Watch, International
Rescue Committee and Jesuit Refugee Services. Pre-testers were asked to answer the
following six questions:
(1) How long did it take you to complete the survey?
(2) Did you find any questions confusing (in terms of grammar, vocabulary, etc.)? If
so, what was confusing?
(3) Did you find any answer choices confusing (in terms of grammar, vocabulary,
etc.)? If so, what was confusing?
(4) Are there any significant questions you feel should have been asked in this
context but were omitted?
(5) Please comment on the technical functionality of accessing and completing the
survey online.
(6) Is there anything else you feel would be helpful for this research?
The pre-test confirmed that respondents could complete the survey within 15 – 20
minutes. As a result of the pre-test, no new questions were added, but open-ended
comment fields were added to some questions to capture options not provided in the
choices. Organizations that participated in the pre-test also completed the final
survey.
Data Analysis
Quantitative data was imported into Microsoft Excel from
SurveyMonkey.com and analyzed using basic descriptive statistics such as
frequencies and cross-tabulations, as well as measures of central tendency where
appropriate. Qualitative data from open-ended questions was analyzed using an
inductive process to identify key themes. Content analysis was used to identify, code,
and categorize the primary patterns in the data20. Content analysis is a research
method that uses a set of procedures to make valid inferences from text; the rules of
the inferential process vary according to the theoretical and substantive interests of
the investigator. It is often used to code open-ended questions in surveys.
20 Kimberly A. Neuendorf, "The Content Analysis Guidebook Online," <http://academic.csuohio.edu/kneuendorf/content/>.
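The descriptive analysis described above – frequencies and cross-tabulations of the exported responses – can be sketched as follows. The records shown are hypothetical stand-ins for the SurveyMonkey export, not actual survey data:

```python
from collections import Counter

# Hypothetical survey records standing in for the exported responses.
responses = [
    {"role": "Program Manager", "experience": "Over 10 years"},
    {"role": "Program Team Member", "experience": "Between 1 - 5 years"},
    {"role": "Program Manager", "experience": "Over 10 years"},
    {"role": "Senior Management", "experience": "Over 10 years"},
]

# Frequencies for a single question (as in Table 2.2).
role_freq = Counter(r["role"] for r in responses)

# Cross-tabulation: joint counts of role by experience level.
crosstab = Counter((r["role"], r["experience"]) for r in responses)

print(role_freq["Program Manager"])                    # 2
print(crosstab[("Program Manager", "Over 10 years")])  # 2
```

A cross-tabulation is simply a frequency count over pairs of answers, which is why a single `Counter` over tuples suffices here; a spreadsheet pivot table performs the same computation.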
Limitations to Survey
The limitations can be grouped into three categories: technical, human and logistical.
Technical: Since the survey used many definitions there was a threat that they
were inadequate or inaccurate representations of meaning. Efforts were made to
minimize this threat by pre-testing the survey. Also, in some cases respondents
were allowed to add to the choices to provide increased flexibility. The second
threat was that the sample set was small and targeted. As a result, research
findings may not be generalized widely within the NGO sector. Nevertheless, the
research provides a unique opportunity to test and refine the data collection
instruments for future use in larger studies that utilize random sample selection.
Human: First, because respondents were selected based on their proximity to
program evaluations, there was a possibility that they would want to appear to use
evaluation findings in their decision-making. To minimize this, the survey
ensured respondent confidentiality, and several respondents completed the survey
anonymously. Second, respondent bias and inaccurate representation of utilization
experiences is a potential limitation. This was minimized to a certain extent by
seeking multiple respondents from each organization to provide, as much as
possible, a balanced interpretation.
Logistical: The main limitation here was the potentially low response rate. To
increase the likelihood of responses, initial contacts within the organizations were
requested to suggest others who could participate, and the survey was provided
online for ease of completion.
Chapter 3: Literature review
Evaluation Utilization
Definitions
What is an evaluation?
A simple definition of the term “evaluation” is the systematic determination of the
quality or value of something. Evaluation may be done for the purpose of
improvement, to help make decisions about the best course of action, and/or to learn
about the reasons for successes and failures. Even though the context of an evaluation
can vary dramatically, it has a common methodology which includes21:
(1) Systematic analysis to determine what criteria distinguish high quality/value from
low quality/value;
(2) Further research to ascertain what levels of performance should constitute
excellent vs. mediocre vs. poor performance on those criteria;
(3) Measurement of performance against those criteria; and
(4) Combination of all of the above information into judgments about the validity of
the information and of the inferences derived from it.
21 Carol Weiss, Evaluation, 2nd ed. (Saddle River, NJ: Prentice Hall, 1997).
Approaches to Evaluation
Over the years, evaluators have borrowed from different fields of study to shape the
approaches and strategies for conducting evaluations. The three major approaches are
presented below:
Scientific-experimental approach: Derives its methods from the pure and
the social sciences. It focuses on the need for objectivity in methods and on
the reliability and validity of the information and data that are generated. The
most prominent examples of the scientific-experimental models of evaluation
are the various types of experimental and quasi-experimental approaches to
data gathering22.
Qualitative/anthropological approach: Emphasizes the importance of
observation and the value of subjective human interpretation in the evaluation
process. Included in this category are the various approaches known in
evaluation as naturalistic inquiry, a paradigm that allows for the study of
phenomena within their natural settings23.
Participant-oriented approach: Emphasizes the importance of the participants
in the process, especially the beneficiaries or users of the object of evaluation.
User- and utilization-focused, client-centered, and stakeholder-based
approaches are examples of participant-oriented models of evaluation24. A
basic tenet of utilization-focused evaluation is that one must prioritize
intended users, uses, and evaluation purposes.
22 Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1963).
23 Y. Lincoln and E. Guba, Naturalistic Inquiry (Thousand Oaks, CA: Sage Publications, 1985).
24 M.Q. Patton, Utilization-Focused Evaluation, 2nd ed. (Beverly Hills, CA: Sage, 1986).
In reality, most evaluations blend these three approaches in various proportions, as
there is no inherent incompatibility among these broad strategies – each of them
brings something valuable to the process.
Types of Evaluation
Evaluations fall into two broad categories: formative and summative25.
Formative evaluations strengthen or improve the object being evaluated and are
undertaken while the object is active or forming -- they help by examining the
delivery of the program or product, the quality of its implementation, and the
organizational context, personnel, procedures, inputs, and so on. Formative
evaluations are useful for various purposes.
• They may help catch problems early on, while they can still be corrected.
• They are an evaluation of process, so they may be useful in understanding
why different outcomes emerge and improving program management.
• They provide an opportunity to collect baseline data for future summative (or
"impact") evaluations.
• They help identify appropriate outcomes for summative evaluations.
25 W.R. Shadish, T.D. Cook, and L.C. Leviton, Foundations of Program Evaluation: Theories of Practice (Newbury Park, CA: Sage Publications, Inc., 1991).
Summative evaluations, in contrast, examine effects or outcomes. They
summarize them by describing what happens subsequent to delivery of the program
or product; assessing whether the object of the evaluation can be said to have
caused the outcome; determining the overall impact of the causal factor beyond
the immediate target outcomes; and estimating the relative costs associated
with the object. Some advantages of summative evaluations include:
• They can provide evidence for a cause-and-effect relationship.
• They assess long-term effects, providing data on change over time.
• They can be effective in measuring impact.
• They measure cost-effectiveness, addressing questions of efficiency.
• They allow for secondary analysis of existing data to address new questions
or uses.
• They offer a meta-evaluation that integrates the outcomes from multiple
studies to arrive at a summary judgment on an evaluation question.
Evaluations can be internal, undertaken by program or organizational staff.
There are occasions, however, when it is useful and important to conduct an
external evaluation, such as when one wants to learn about the longer-term
impact of a program in relation to the broader issues in the field. Some of the
advantages and disadvantages of conducting internal and external evaluations
are outlined below.
Table 3.1 – Advantages/Disadvantages of Internal and External Evaluations

Internal evaluator
  Advantage: Familiar with the program and needs less time to learn about the organization and its interests.
  Disadvantage: May know the program too well and find it difficult to be objective; may also lack specific evaluation training or experience.
  Advantage: Known to staff and therefore less of a threat.
  Disadvantage: May hold a position of power and authority, and personal gain may influence his or her findings and/or recommendations.

External evaluator
  Advantage: Not personally involved in the program and can therefore be more objective when collecting and analyzing data and presenting the results.
  Disadvantage: May cause anxiety among program staff when they are unsure of the motives of the evaluation/evaluator.
  Advantage: Not a part of the power structure.
  Disadvantage: May not fully understand the goals and objectives of the program or its context.
  Advantage: Can take a fresh look at the program or organization.
  Disadvantage: An external evaluation can be expensive, time-consuming, and disruptive of ongoing progress.
Evaluation Use
For the purposes of this research, the definition of evaluation use is derived
from Weiss’s 1966 paper; though four decades old, its relevance still rings true.
“The basic rationale for evaluation is that it provides information for action. Its
primary justification is that it contributes to the rationalization of decision making.
Although it can serve such other functions as knowledge-building and theory-testing,
unless it gains serious hearing when program decisions are made, it fails in its major
purpose.”
Types of Use
In the most permissive view of utilization, any time anyone uses anything from
an evaluation for any purpose counts as utilization. Through this lens one can
argue that utilization occurs in almost every case. At the other end is the
restrictive view that utilization occurs only when an intended user makes a
specific decision immediately following the evaluation report, based solely on
the findings of that report. The spectrum of evaluation use can be grouped into
the following categories26:
(1) Instrumental – brings about changes in practice and procedures as a direct
result of the evaluation findings. Change occurs through specific action.
26 Marvin Alkin, Richard Daillak, and Peter White, Using Evaluations: Does Evaluation Make a Difference? (Beverly Hills: Sage Publications, 1979).
Evidence for this type of utilization involves decisions and actions that arise
from the evaluation, including the implementation of recommendations.
(2) Conceptual – is more indirect and relates to an increased understanding of the
topic. This type of use occurs first in the thoughts and feelings of
stakeholders. Over time achieving conceptual use can lead to more actionable
instrumental use.
(3) Symbolic – is when an evaluation is conducted merely to demonstrate
compliance with an external requirement or to justify a pre-existing position
of an agency. For example, an evaluation may be conducted with no intention
of utilizing the findings but merely to justify program decisions already made.
(4) Strategic – is the use of evaluation findings to persuade others or to gain
particular outcomes27. It is often seen when findings influence decisions
beyond the scope of the evaluation -- for example, changing the course of
programming or informing the larger strategic vision of the organization.
(5) Process – refers to the ways in which being engaged in the processes of
evaluation can be useful quite apart from the findings that emerge from those
processes. It can lead to changes in the beliefs and behaviors of participants
and ultimately to organizational change.
27 W.R. Shadish, T.D. Cook, and L.C. Leviton, Foundations of Program Evaluation: Theories of Practice (Newbury Park, CA: Sage Publications, Inc., 1991).
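The five categories above can also be read as a simple classification scheme. The sketch below restates them as a lookup table for tagging observed instances of use (for example, when coding interview data); the identifiers and the tagging function are hypothetical shorthand introduced here for illustration, not constructs from the evaluation literature.

```python
# A minimal sketch of the five-category taxonomy of evaluation use
# described above. The category names follow the text; the data
# structure and tagging function are hypothetical illustrations.

USE_TYPES = {
    "instrumental": "Direct changes in practice or procedure traceable to findings",
    "conceptual": "Indirect shifts in stakeholders' understanding of the topic",
    "symbolic": "Evaluation done to demonstrate compliance or justify prior decisions",
    "strategic": "Findings used to persuade others or shape decisions beyond the study",
    "process": "Benefits of participating in the evaluation, apart from its findings",
}

def tag_use(observation: str, use_type: str) -> tuple[str, str]:
    """Attach a taxonomy label to an observed instance of use."""
    if use_type not in USE_TYPES:
        raise ValueError(f"unknown use type: {use_type!r}")
    return (use_type, observation)

# Example: an implemented recommendation is an instance of instrumental use.
label, obs = tag_use("Staff revised intake procedures after the report", "instrumental")
```

In practice the categories overlap within a single evaluation, so a coder might attach several labels to one episode; the single-label function here is only the simplest case.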
1960s through 1970s: The Foundation Years
The prominence of evaluation research in the late 1960s and early 1970s can
be attributed to studies of that era documenting the low degree of utilization of
social research data in policy making and program improvement in governmental
operations. The mainstream view was that evaluations seldom influence program
decision-making, and there was little hope that evaluation would ever have any
real impact on programs. The initial debates therefore centered on whether
evaluations did in fact make a difference.
Carol Weiss’s 1966 paper “Utilization of Evaluation: Toward Comparative
Study” signaled the beginning of the organized study of evaluation utilization28. In
it, Weiss laid out what was later widely accepted as the primary argument for doing
evaluations: to increase the rationality of program decision-making. Measuring by
this standard, Weiss not only found some instances of effective utilization but also
observed a high degree of non-utilization. In presenting the factors that might
account for this non-utilization she focused on two main categories:
• organizational systems and
• evaluation practice
By organizational systems she refers to the informal goals and social structures
influencing decision-making that are often overlooked by classic evaluation models
geared towards the formal goals of the organization. Weiss also strongly criticized the
28 Alkin, Daillak, and White, Using Evaluations: Does Evaluation Make a Difference?
evaluation practice of that time: “…inadequate academic preparation…low status in
academic circles….practitioner resistance…inadequate time to follow-
up...inadequacies of money and staffing....etc”29 She established the need for a
systematic study of the conditions and factors associated with the utilization of
evaluation results. Her initial groupings included not only organizational and
political factors but also more practical, technical, and operational factors. Weiss’s
paper generated much excitement in evaluation circles and was the impetus for more
rigorous research on the further categorization of potential factors -- drawn from
other fields of study such as education theory, decision theory, organizational theory,
and communication theory30 -- and on how they aid or impede utilization.
The second stage of advancement in the study of evaluation utilization came
in the mid-1970s. Researchers Marvin Alkin and Michael Quinn Patton, working
separately, produced more comprehensive listings of potential utilization factors.
The shortcoming of these lists was that they came from a theoretical base rather
than from empirical evidence.31 It was not until the late 1970s that the factors drawn
out of program research were published. Through large-scale surveys, smaller
interview studies, case studies, observations, and the collection of anecdotes,
researchers further discovered how prospective users made use of research and
evaluation findings. The earlier mainstream view on evaluation now began
29 Carol H. Weiss, ed. Utilization of Evaluation: Toward Comparative Study, Evaluating Action Programs: Readings in Social Action and Education (Boston: Allyn and Bacon,1972). 30 H. R. Davis and S. E. Salasin, eds., The Utilization of Evaluation, vol. 1, Handbook of Evaluation Research (Beverly Hills: Sage Publications,1975). 31 Scarvia B. Anderson and Samuel Ball, The Profession and Practice of Program Evaluation (San Francisco: Jossey-Bass, 1978).
shifting towards a different conclusion: that evaluations do influence programs in
important and useful ways.32
1980s through 1990s: The Rise of Context in Evaluation Theory
In the early 1980s there was general agreement that evaluation use was a
multi-dimensional phenomenon best described by the interaction of several
dimensions, namely the instrumental (decision support and problem solving),
conceptual (educative), and symbolic (political) dimensions.33 As researchers
continued to produce indicators and predictors of use along these dimensions,
Cousins and Leithwood’s (1986) meta-analytic work went a step further to assess
the relative weight of factors in their ability to predict use. Their findings indicated
that the quality, sophistication, and intensity of evaluation methods were among the
most potent influences on the use of findings.34 This report, along with Greene’s35
observations, set the direction for future research, arguing that it is not enough
simply to describe different types of use and to catalogue the contributing factors;
the real need was to specify the relative weight of influential factors. Soon, however,
researchers emerged with findings that contradicted each other, failing to establish a
clear hierarchy of influential factors.
32 Carol Weiss, Social Science Research and Decision-Making (New York: Columbia University Press, 1980).
33 Lyn M. Shulha and J. Bradley Cousins, "Evaluation Use: Theory, Research and Practice since 1986," American Journal of Evaluation 18, no. 1 (1997).
34 Ibid.
35 Jennifer C. Greene, "Stakeholder Participation and Utilization in Program Evaluation," Evaluation Review 12, no. 2 (1988).
While Cousins and Leithwood emphasized evaluation methods, Levin36, applying the
same framework, concluded that contextual factors were pivotal in explaining
patterns of use. Another perspective, advanced in the works of Greene (1990)37, King
(1988)38, and Weiss, holds that political activity is inextricably linked to effective use.
They argue that decision makers do not act alone and face an onslaught of decision-
relevant information from competing interest groups and changes in program
circumstances. This finding was further strengthened by Mowbray (1992)39, who,
using political frames of reference, described how the loss or acquisition of resources
during an evaluation significantly changed the effects of the evaluation. At the same
time, another group of researchers linked organizational structure and process to
effective use. Research by Mathison (1994)40 and Owen and Lambert (1995)41 found
that the levels of bureaucracy within an organization, the lines of communication
within and across these levels, and the degree of decision-making autonomy within
program units contributed to the increased utility of evaluation findings.
Patton (1997) added an extra dimension to the factors influencing use by
examining the interaction between the evaluator and the program context. In arguing
that evaluations must serve the intended use of intended users, Patton positions the
36 B. Levin, "The Uses of Research: A Case Study in Research and Policy," The Canadian Journal of Program Evaluation 2, no. 1 (1987).
37 J. C. Greene, "Technical Quality Vs. User Responsiveness in Evaluation Practice," Evaluation and Program Planning 13 (1990).
38 J.A. King, "Research on Evaluation and Its Implications for Evaluation Research and Practice," Studies in Educational Evaluation 14 (1988).
39 C.T. Mowbray, "The Role of Evaluation in Restructuring of the Public Mental Health System," Evaluation and Program Planning 15 (1992).
40 S. Mathison, "Rethinking the Evaluator Role: Partnerships between Organizations and Evaluators," Evaluation and Program Planning 17, no. 3 (1994).
41 J.M. Owen and F.C. Lambert, "Roles for Evaluation in Learning Organizations," Evaluation 1, no. 2 (1995).
evaluator in the thick of the program context.42 Drawing from the work of
organizational learning scholars Chris Argyris and Donald Schön, he constructed the
theory of a user-focused approach to evaluation use.43 In this theory the evaluator’s
task is to facilitate intended users, including program personnel, in articulating their
operating objectives. Patton argues that involving potential users in constructing and
planning an evaluation creates more ownership of the results produced and thereby
increases the likelihood of use. Based on his case studies, Patton in 1997 presented a
basic framework of the utilization-focused evaluation process.
The flow of processes within this framework is as follows:
- identify the intended users of the evaluation
- identify the intended uses
- agree on the methods, measures, and design of the evaluation
- involve intended users actively and directly in interpreting findings, making
judgments based on the data, and generating recommendations
- disseminate findings to the intended users
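The flow above is essentially sequential, with Patton's renegotiation remedy acting as a loop when primary users turn over. A schematic sketch follows; the function, step strings, and turnover threshold are hypothetical devices introduced for illustration, not part of Patton's framework itself.

```python
# A schematic rendering of the utilization-focused evaluation flow
# summarized above. Step names and the turnover threshold are
# hypothetical; only the ordering and the idea of renegotiating with
# new users after large-scale turnover come from the text.

PATTON_STEPS = [
    "identify intended users",
    "identify intended uses",
    "agree on methods, measures, and design",
    "involve users in interpreting findings and making recommendations",
    "disseminate findings to intended users",
]

def run_evaluation(users: set[str], turnover_events: dict[int, set[str]]) -> list[str]:
    """Walk the steps; on large-scale user turnover, renegotiate design
    and use commitments with the remaining/new users before continuing."""
    log = []
    for i, step in enumerate(PATTON_STEPS):
        departed = turnover_events.get(i, set())
        users -= departed
        # Hypothetical threshold: "large-scale" = departures outnumber
        # or equal the users who remain.
        if departed and len(departed) >= len(users):
            log.append("renegotiate design and use commitments with new users")
        log.append(step)
    return log

# Example: the program director departs just before interpretation begins.
trace = run_evaluation({"director", "program lead"}, {3: {"director"}})
```

The renegotiation entry delays the walk, mirroring Patton's observation that renegotiating with a new set of users delays the evaluation but pays off in eventual use.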
While the framework provides ample room for flexibility within different
contexts, it does have a major point of vulnerability -- the turnover of primary
intended users.44 The framework depends heavily on the active engagement of
42 Shulha and Cousins, "Evaluation Use: Theory, Research and Practice since 1986."
43 Michael Quinn Patton, Utilization-Focused Evaluation (Beverly Hills, CA: Sage Publications, 1997).
44 Ibid.
intended users, and losing users along the way to job transitions, reorganizations,
and reassignments can undermine eventual use. Patton acknowledges that
replacement users who join the process late seldom arrive with the same agenda as
those who were present at the beginning. He offers two solutions to this problem.
The first is to maintain a large enough pool of intended users that the departure of a
few will not impair utilization. The second, in the event of a large-scale turnover of
intended users, is to renegotiate the design and use commitments with the new set of
users. Even though this delays the evaluation process, it pays off in eventual use.
Patton’s work set in motion some of the more innovative research on evaluation
use: how the process of evaluation itself can lead to organizational learning.
Several studies (Ayers, 198745; Patton, 199446; Preskill, 199447) have shown linkages
between intended-user participation and increased personal learning, which then led
to improved program practice. With the notion of evaluation for organizational
learning attracting considerable attention, researchers started looking beyond the
effects of evaluation use on specific program practice. Several theorists made strong
cases for understanding evaluation impact in an organizational context.48 The findings
by Preskill (1994) showed signs of a relationship between evaluation activities and the
development of organizational capacity. While evaluations are undertaken along the
lines of an organization’s formal goals, the integration of findings into practice has to
45 Toby Diane Ayers, "Stakeholders as Partners in Evaluation: A Stakeholder-Collaborative Approach," Evaluation and Program Planning, no. 10 (1987).
46 M.Q. Patton, "Developmental Evaluation," Evaluation Practice 15, no. 3 (1994).
47 H. Preskill, "Evaluation's Role in Enhancing Organizational Learning," Evaluation and Program Planning 17, no. 3 (1994).
48 Shulha and Cousins, "Evaluation Use: Theory, Research and Practice since 1986."
fit into the numerous informal goals and structures within any organization, some of
which may have their own cultures and imperatives. Preskill cautions that an
essential element in successfully linking evaluation and organizational learning is the
willingness of the organization’s management to support the collaborative process
and accept evaluative information.49
49 Ibid.
The 21st Century: Stretching the Boundaries beyond Use
Carol Weiss’s 1998 article50 “Have We Learned Anything New About the
Use of Evaluation?” sets the tone for the challenges currently facing evaluation use
theorists. She states that while there have been many achievements in the last three
decades, most of the learning has come from applying new constructs and
perspectives rather than from empirical research on evaluation use. She further
argues that with the growing realization of how complicated the phenomenon of use
is, and how different situations and contexts can be from each other, it is
conceptually and theoretically difficult to reduce the elements of use to a set of
quantitative factors. Mark and Henry (2004) corroborate Weiss’s observations,
stating that the study of evaluation use “is an overgrown thicket because very
different positions have been advocated as to the scope”. As a result of this myriad
of theories and conflicting literature, they say, even after three decades of research
evaluators may not have a common understanding of what it means for an
evaluation to be used, or of what an evaluator means when he or she refers to use.
As a response to such overgrowth within the taxonomies of use, Kirkhart51
developed an integrated theory of influence. She broadens the question from how
the results of an evaluation study are used to how and to what extent an evaluation
shapes, affects, supports, and changes persons and systems. To answer this she
proposes a framework that shifts the focus from use to influence, as the term use is
limited to
50 Carol Weiss, "Have We Learned Anything New About the Use of Evaluation?," American Journal of Evaluation 19, no. 1 (1998).
51 K. E. Kirkhart, "Reconceptualizing Evaluation Use: An Integrated Theory of Influence," New Directions for Evaluation, no. 88 (2000).
results-based measures and does not include the unintended effects of evaluation or
the gradual emergence of impact over time. Kirkhart defines evaluative influence
as “the capacity or power of persons or things to produce effects on others by
intangible or indirect means”.
In Kirkhart’s model (Figure 3.1) the source of influence can arise from either
the evaluation process or the evaluation results. She does acknowledge that some of
the influence that comes from the evaluation process will bear on the results of the
study, and thus the two sources of influence are interrelated.
Figure 3.1 - Kirkhart’s integrated theory of influence (three dimensions: source --
process, results; intention -- intended, unintended; time -- immediate, end of cycle,
long term)
The second dimension is the intention of the influence, defined as “the extent to
which evaluation influence is purposefully directed, consciously recognized and
anticipated.” The final dimension is the timing of the influence -- immediate (during
the study), end of cycle, and long term. One of the key benefits Kirkhart proposes
for this model is the ability to distinguish between use and misuse: by tracking the
influences around an evaluation study and the evolving patterns of influence over
time, she contends, one can map the outcomes of use as beneficial or not.
Henry and Mark (2003)52 and Mark and Henry (2004)53 further advanced the
discussion of evaluation use, building on Kirkhart’s theory of influence. They propose
a set of theoretical categories -- “mediators and pathways” -- through which evaluation
can exercise influence. Drawing from the social science literature, they developed a
theory of change that applies to the consequences of evaluation at the individual,
interpersonal, and collective levels.
In Table 3.2 below, Mark and Henry present the general influence outcomes as the
“fundamental architecture of change”: even though these outcomes may not yield
change by themselves, they are likely to set in motion change in the
cognitive/affective, motivational, or behavioral outcomes. Consider, for example,
the influence of elaboration. An individual simply spending time thinking about an
evaluation finding does not create any measurable use unless those thoughts lead to
a change in attitude valence (positive or negative). Even though elaboration does not
directly deliver use, it is an important immediate consequence of evaluation, without
which changes in behavior might not occur. Elaboration can be measured by
assessing how much time or effort an individual spends thinking in response to a
message. An evaluation report, a conversation about an evaluation, or a newspaper
article about an evaluation could trigger such cognitive processing. For example, a
recently publicized evaluation about the positive effects of primary feeding centers
may cause a reader at
52 Gary Henry and Melvin Mark, "Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions," American Journal of Evaluation 24, no. 3 (2003).
53 Melvin Mark and Gary Henry, "The Mechanisms and Outcomes of Evaluation Influence," Evaluation 10, no. 1 (2004).
another location to think more about her views on nutrition in refugee camps. Such a
change may be exactly what some evaluators consider enlightenment. Of course, an
evaluator would be interested not only in whether someone engaged in elaboration,
but also in what changes, if any, this led to in the person’s attitudes, motivations,
and actions. Still, elaboration itself is an important immediate consequence of
evaluation, which might in turn produce a change in the individual’s opinion about
nutrition programs and, perhaps, subsequent change in behavior. General influence
processes can occur at all three levels -- the individual, the interpersonal, and the
collective -- as indicated in Table 3.2. Consideration of these influence processes is
important for understanding how evaluation can influence attitudes and actions.
Cognitive and affective outcomes refer to shifts in thoughts and feelings, such as a
step towards action as in agenda setting. Mark and Henry argue that although
motivational outcomes, which refer to human responses to perceived rewards and
punishments, have received less attention in the literature, they may be more
important as an intermediate tool for influencing practitioner behavior towards
increased evaluation use than as a long-term outcome. Behavioral outcomes refer to
measurable changes in actions, both short-term and long-term; these would include
changes in a teacher’s instructional practices at the individual level or a government
policy change at the collective level. Thus, behavioral processes often comprise the
long-term outcomes of interest in a chain of influence processes.
Mark and Henry further attempt to tie the traditional forms of use to the
above outcomes. Instances of instrumental use (where change occurs in action) fall
within the behavioral row of Table 3.2. Conceptual use (where change occurs in
thoughts and feelings) corresponds to the cognitive and affective row. Symbolic use
(where the evaluation is used to justify a pre-existing position) ties into a limited
set -- “justification” at the interpersonal level and “ritualism” at the collective level.
In contrast, process use does not correspond to specific rows of Table 3.2, as its
changes occur as a result of the process of evaluation rather than as a result of an
evaluation finding.
Table 3.2 - A Model of Outcomes of Evaluation Influence54

General influence outcomes
  At the individual level: elaboration; heuristics; priming; skills acquisition
  At the interpersonal level: justification; persuasion; change agent; minority-opinion influence
  At the collective level: ritualism; legislative hearings; coalition formation; drafting legislation; standard setting; policy consideration

Cognitive and affective outcomes
  At the individual level: salience; opinion/attitude valence
  At the interpersonal level: local descriptive norms
  At the collective level: agenda setting; policy-oriented learning

Motivational outcomes
  At the individual level: personal goals and aspirations
  At the interpersonal level: injunctive norms; social reward; exchange
  At the collective level: structural incentives; market forces

Behavioral outcomes
  At the individual level: new skill performance; individual change in practice
  At the interpersonal level: collaborative change in practice
  At the collective level: program continuation, cessation or change; policy change; diffusion

54 Mark and Henry, "The Mechanisms and Outcomes of Evaluation Influence."
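The grid in Table 3.2 can also be rendered as a nested mapping, which makes it easy to check which outcome type and level a given mechanism belongs to. The structure and the lookup function below are hypothetical illustrations; the entries themselves are transcribed from the table.

```python
# Table 3.2 rendered as a nested mapping (hypothetical illustration;
# entries transcribed from Mark and Henry's grid).

INFLUENCE_GRID = {
    "general": {
        "individual": ["elaboration", "heuristics", "priming", "skills acquisition"],
        "interpersonal": ["justification", "persuasion", "change agent",
                          "minority-opinion influence"],
        "collective": ["ritualism", "legislative hearings", "coalition formation",
                       "drafting legislation", "standard setting",
                       "policy consideration"],
    },
    "cognitive_affective": {
        "individual": ["salience", "opinion/attitude valence"],
        "interpersonal": ["local descriptive norms"],
        "collective": ["agenda setting", "policy-oriented learning"],
    },
    "motivational": {
        "individual": ["personal goals and aspirations"],
        "interpersonal": ["injunctive norms", "social reward", "exchange"],
        "collective": ["structural incentives", "market forces"],
    },
    "behavioral": {
        "individual": ["new skill performance", "individual change in practice"],
        "interpersonal": ["collaborative change in practice"],
        "collective": ["program continuation, cessation or change",
                       "policy change", "diffusion"],
    },
}

def locate(mechanism: str) -> list[tuple[str, str]]:
    """Return every (outcome type, level) cell containing the mechanism."""
    return [(outcome_type, level)
            for outcome_type, levels in INFLUENCE_GRID.items()
            for level, items in levels.items()
            if mechanism in items]
```

For instance, looking up "elaboration" places it among the general influence outcomes at the individual level, matching the discussion of elaboration above.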
An area that has emerged as an important focus within evaluation theory
deserves a brief review here -- misuse. It is important to differentiate misuse from
non-use. Non-use occurs when there is a rational or unintended reason for ignoring
an evaluation, such as the poor quality of the report or a change in strategic
direction. Misuse, on the other hand, can occur when an evaluation is commissioned
with no intention of acting upon it or when there are deliberate attempts to subvert the
process and/or the findings. Among the first notable researchers on misuse, Alkin
and Coyle55 described several distinct variations.
(1) Justified non-use: when the user is aware that the evaluation was technically
flawed or erroneous, he or she is justified in not incorporating the
information into decision-making.
(2) Unintentional non-use: when the evaluation was of sufficient technical
quality but potential users are unaware of its existence, or inadvertently fail
to process the information.
(3) Abuse: when the information is known to be of superior quality but is
suppressed or distorted by a potential user for political or otherwise covert
reasons.
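These three variations reduce to a decision over two attributes of an episode: whether the evaluation was technically sound, and what the potential user did with it. The sketch below is a hypothetical restatement of that logic; the attribute names and action labels are invented here for illustration and do not come from Alkin and Coyle's paper.

```python
# A hypothetical restatement of the three variations above as a decision
# rule over two attributes of an evaluation episode.

def classify_episode(technically_sound: bool, user_action: str) -> str:
    """Return the category for an unused or misused evaluation.

    user_action is one of "ignored_knowingly", "unaware", or
    "suppressed_or_distorted" (labels invented for this sketch).
    """
    if not technically_sound and user_action == "ignored_knowingly":
        return "justified non-use"       # flawed study, rationally set aside
    if technically_sound and user_action == "unaware":
        return "unintentional non-use"   # sound study, never reached its users
    if technically_sound and user_action == "suppressed_or_distorted":
        return "abuse"                   # sound study, deliberately subverted
    return "unclassified"

# Example: a sound evaluation whose findings were suppressed counts as abuse.
category = classify_episode(True, "suppressed_or_distorted")
```

As the "unclassified" branch suggests, real episodes rarely fit such clean attributes, which is part of why scholars have struggled to build a standardized framework for gauging misuse.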
Stevens and Dial56 (1994) outlined a list of practices that constitute misuse,
such as changing evaluation conclusions, selectively reporting results, ascribing
to a study findings that differ from the actual results, oversimplifying results, and
failing to qualify results. However, as noted in Alkin57 (1990), as with evaluation
use, scholars continue to struggle with the complexity of misuse and the challenge
of establishing a standardized framework in which to gauge it. Below is Alkin’s
attempt to classify the causal relationships that lead to misuse.
55 Marvin Alkin and Karin Coyle, "Thoughts on Evaluation Utilization, Misutilization and Non-Utilization," Studies in Educational Evaluation 14, no. 3 (1988).
56 C. L. Stevens and M. Dial, eds., What Constitutes Misuse?, New Directions for Program Evaluation: Guiding Principles for Evaluators (San Francisco: Jossey-Bass, 1994).
57 Marvin C. Alkin, Debates on Evaluation (Newbury Park, California: Sage Publications, 1990).
Figure 3.2 – Evaluation Use Relationships
Irrespective of how misuse or non-use is categorized, the fact remains that precious
resources -- effort, time, and money -- are wasted and opportunity costs incurred
when they occur. In reality, use, non-use, and misuse can overlap within a single
evaluation, strongly influenced by the interests and motives of the users and by the
organizational context. It is important to note that this research does not presuppose
that all evaluation recommendations are sound and should therefore be
implemented. Program evaluations share the complexity of the work they are
evaluating. They are at best a set of informed judgments made in specific contexts.
As a result, the recommendations of even the ‘best’ evaluation can be disputed or
rejected on perfectly rational grounds, resulting in non-use.
The emergence over the last decade of several evaluation utilization
frameworks, based on collective experience and findings from other fields, has
strengthened knowledge about the processes underlying evaluation utilization.
Weiss (1998) summarized the current state of evaluation use research: “we may not
have solved the problem but we are thinking about it in more interesting ways.”
Process Models of Evaluation Use
Theoretical process models, espoused by various evaluation use scholars,
attempt to integrate the factors that affect use into systems, showing the
interrelationships among factors and their environment. What follows is a list of
models derived from the empirical research and from the theoretical literature.58
Implicit evaluation utilization process-models
These are models in which individual factor influences are implied but not
directly depicted in the construct. The first theorist with an implicit process model
is Campbell, who in the 1960s contended that the major responsibility for the use of
evaluations lies with the political process, not with the evaluator59. He views the
evaluator as a scientist who conducts the evaluation using the best methods possible
but does not directly promote the use of findings. His assumption, like that of other
early theorists, was that evaluations will be used when they are well done.
Figure 3.3 Campbell’s implicit process-model:
program evaluation reports of past programs → consideration by policy-makers
along with other information → instrumental use
58 Burke R. Johnson, "Toward a Theoretical Model of Evaluation Utilization," Evaluation and Program Planning 21 (1998).
59 Campbell and Stanley, Experimental and Quasi-Experimental Designs for Research.
Scriven’s model takes a summative approach, in which the evaluator examines the
comparative strengths and weaknesses of a program and makes a final judgment of
worth -- is the program “good” or “bad”? Program decision-makers are viewed as
similar to consumers of other products: based on the final judgment, they make
rational choices60.
Figure 3.4 Scriven’s summative model:
within the organizational environment, final summative evaluation report →
marketplace of ideas and information → use by people interested in the program
Weiss’s model of evaluation use focuses on the individual level61. She contends that
decisions are the result of three major influences: (1) information, (2) ideology, and
(3) interests. The influence of these three factors is tempered by the organizational
environment in which the individual resides. Furthermore, decisions are guided by
two questions: does the information conform to prior knowledge (“truth tests”)?
And are the recommendations feasible and action-oriented (“utility tests”)?
60 M. S. Scriven, ed. Evaluation Ideologies, Evaluation Models: Viewpoints on Educational and Human Service Evaluation (Boston: Kluwer-Nijhoff, 1983).
61 C. H. Weiss, ed. Ideology, Interest, and Information: The Basis of Policy Decisions, Ethics, the Social Sciences, and Policy Analysis (New York: Plenum, 1993).
Figure 3.5 Weiss’s implicit decision model:
within the organizational environment, information, ideology, and interests --
filtered through truth tests and utility tests -- lead to the decision to use
Wholey based his model around instrumental use, stating that evaluation should
directly serve the needs of management and provide immediate, tangible use. He
argues that if the potential for use of an evaluation does not exist (which he would
determine from an “evaluation assessment”) then the evaluation should not be done.
Taking into account the resource limitations of programs, Wholey recommends a
process where evaluations are prioritized and designed to meet program budgets.
Figure 3.6 Wholey’s resource-dependent model: Assessment of evaluation needs → Evaluation implementation → Change in program → Continuous instrumental use
Cronbach talked about the need to understand in detail the process going on in a
program to effectively use its findings62. He suggests that there are often multiple
interactions among factors that can be captured only if the process is examined more
closely. Cronbach also suggests that if changes are found to be required while examining
the process, they need to be communicated to the stakeholders during the evaluation
rather than waiting for a final report. So in this model, the evaluator is called upon to
carry out an educational role.
62 L. J. Cronbach, Designing Evaluations of Educational and Social Programs (San Francisco: Jossey-Bass, 1982).
Figure 3.7 Cronbach’s process model: Analysis of background theoretical literature → Program development → Continuous feedback and modification of program and evaluation questions → Long-term conceptual use
The final model is Rossi’s. He suggests that to increase use, evaluators should
tailor evaluation activities to local needs63. How this is done depends on the stage and
kind of program being evaluated. This process of “fitting evaluations to
programs” can be viewed as an approach to increasing evaluation use.
Figure 3.8 Rossi’s process model: Review literature on similar programs → Work with program managers to develop model → Collect data → Compare model with reality → Modify program (instrumental and conceptual use)
Explicit evaluation utilization process-models
Explicit process-models are those that are constructed by researchers and
directly tested on empirical data. A frequently cited explicit process-model of
63 Johnson, "Toward a Theoretical Model of Evaluation Utilization."
evaluation utilization was developed by Greene64. She suggested that stakeholder
participation in evaluation planning and implementation is an effective way to
promote use. Based on her findings, Greene categorized stakeholders into three
groups: (1) very involved, (2) somewhat involved and (3) marginally involved.
According to this participatory approach, stakeholders must be involved in the
formulation and interpretation phases of the evaluation.
Figure 3.9 Greene’s participatory evaluation process: Diversity of stakeholder participants and a substantive decision-making role for stakeholders feed into iterative, ongoing communication and dialogue with stakeholders and active discussion of key program issues amidst diverse perspectives. These yield learning more about the program and agency, learning more about evaluation, greater understanding of results, affective individual learning of worth and value, and voice for the less powerful alongside interest and attention from the most powerful. The outcomes are heightened perceptions of the results as valid, credible and persuasive, greater acceptance and ownership of the results, and a greater sense of obligation to follow through on the results.
64 Greene, "Stakeholder Participation and Utilization in Program Evaluation."
Cousins and Leithwood developed an evaluation utilization model in 1986 and further
expanded it in 1993 into the “knowledge utilization” model65. This model lists
seventeen key factors (shown below), grouped into three sets, that affect use. All three
sets of factors directly affect utilization; additionally, the first two sets
affect the third set, the interactive processes.
Figure 3.10 Cousins and Leithwood utilization model
Characteristics of the source of information: sophistication, quality, credibility, relevance, communication quality, content, timeliness
Improvement setting: information needs, focus for improvement, political climate, competing information, user commitment, user characteristics
Interactive processes: involvement, social processing, ongoing contact, engagement, diffusion
Knowledge utilization: information processing, decision, learning
65 Johnson, "Toward a Theoretical Model of Evaluation Utilization."
Alkin, one of the earliest researchers in the evaluation utilization literature, developed
an evaluation-for-use model66. In it he includes a list of factors grouped into three
categories: human (evaluator and user characteristics), context (fiscal constraints,
organizational features, project characteristics) and evaluation (procedures, reporting)
factors. Alkin organizes what he sees as the most important of these factors in the
sequence shown below.
Figure 3.11 Alkin’s factor model: Setting the stage → Identifying / organizing the participants → Operationalizing the interactive process → Adding the finishing touches
Patton is the founder of “Utilization-Focused Evaluation”. In this approach, the
evaluator considers potential use at every stage of the evaluation67,
working closely with the primary intended users. Patton identifies organizational
decision-makers as the primary users, and information that is helpful in decision-making
is factored into the evaluation design.
66 M. C. Alkin, A Guide for Evaluation Decision Makers (Newbury Park, CA: Sage, 1985). 67 Patton, Utilization Focused Evaluations.
Figure 3.12 Patton’s utilization-focused evaluation framework: Identify primary intended users and stakeholders → Focus the evaluation on stakeholders’ questions, issues and intended uses → Collect data → Involve users in the interpretation of findings → Disseminate findings for indirect utilization
Drawing from the various models, Johnson (1998) concludes that evaluation
utilization is a continual and diffuse process that is interdependent with local
contextual, organizational and political dimensions. Participation by stakeholders is
essential, and continual (multi-way) dissemination, communication and feedback of
information and results to evaluators and users (during and after a program) help
increase use by increasing evaluation relevance, program modification and
stakeholder ownership of results. Different models refer to the nature and role of the
organization, as an entity, in facilitating use: they focus on how people operate in a
dynamic learning system, how they come to create and understand new ideas, how
they adapt to constantly changing situations and how new procedures and strategies
are incorporated into an organization’s culture. On reviewing the above literature, it
seems clear that evaluation use is a continual process that evolves and changes over
time, with each iteration adding new factors to the spectrum of those influencing use.
Despite attempts to build a simplified framework for evaluation use, it remains clear
that the utilization process is not a static, linear process – but one that is dynamic,
open and multi-dimensional.
Program Evaluation Systems in NGOs
Definitions
What is an NGO program?
Typically, NGOs identify several overall goals which must be reached to accomplish
their mission. Each of these goals often becomes a program. Nonprofit programs can
be viewed as processes or methods to provide certain services to their constituents.
What is program evaluation?
Program evaluation entails the use of scientific methods to measure the
implementation and outcomes of programs for decision-making purposes.68
For the purpose of this research, program evaluation is defined as the
systematic study to assess the planning, implementation, and/or results of a program
with the aim of improving future work. A program evaluation can be carried out for a
variety of reasons, such as needs assessment, accreditation, cost/benefit
analysis, effectiveness, and efficiency. Evaluations can be formative or summative;
in practice, however, evaluation is normally carried out after program completion, usually by
external evaluators.
Program evaluation is sometimes mistakenly used interchangeably with other
measures like monitoring and impact assessment. Though all of these are used to
observe a program’s performance, they are distinct from each other. While monitoring
explains what is happening in a program, it is evaluation that attempts to explain why
these things are happening and what lessons can be drawn from them. On the other
hand, impact assessment tries to assess what has happened as a result of the program
and what may have happened without it.
68 Rutman, Evaluation Research Methods: A Basic Guide.
Monitoring is the systematic collection and analysis of information as a project
progresses. It is aimed at improving the efficiency and effectiveness of a project. It
helps to keep the work on track, and can let management know when things are going
wrong to allow for course corrections. It also enables you to determine whether the
resources are being used efficiently and assess the capacity to complete the project
according to plan.
Evaluation is the comparison of actual project outcomes against the agreed plans. It
looks at what you set out to do, at what you have accomplished, and how you
accomplished it. It can be formative (taking place during the life of a project) or
summative (drawing learning from a completed project).
Impact assessment is used to assess the long-term effects of the project. It covers not just
the evaluation of process, outputs and outcomes of the project, but also their ultimate
effect on people’s lives. Impact assessments go beyond documenting change to assess
the effects of interventions on individual beneficiaries and their environment, relative
to what would have happened without them – thereby establishing the counterfactual.
They measure any discernible change attributable to the project.
For example, consider a program that provides K-12 education to inner-city children.
Monitoring will indicate whether program resources are being delivered efficiently and
effectively to the target population. Evaluation will indicate whether the objectives of the
program were achieved – i.e., education was provided to the target children. Impact
assessment will examine whether the strategy to provide education was successful. Did it
enable the children to then secure higher-paying jobs? Were they able to break the cycle
of poverty? Etc.
Types of program evaluation
Program evaluation types differ in their primary objectives, their subjects, timing and
orientation. Listed below are the three most common types of evaluations in NGOs.69
Goals-based evaluations assess the extent to which programs are meeting
predetermined goals or objectives. Questions explored include:
o What is the status of the program's progress toward achieving the goals?
o Will the goals be achieved according to the timelines specified?
o Are there adequate resources (money, equipment, facilities, training, etc.)
to achieve the goals?
69 Carter McNamara, Field Guide to Nonprofit Program Design, Marketing and Evaluation (Minneapolis: Authenticity Consulting, 2003).
Process-based evaluations assess a program’s strengths and weaknesses.
Questions explored include:
o How does the program produce the results that it does?
o What are the levers that make the program successful? What impedes
progress?
o How are program-related decisions made? What influences them and what
resulting actions are taken?
Outcomes-Based evaluations assess if the program is doing the “right” activities
to bring about desired outcomes for clients. Questions explored include:
o How closely is the program aligned with the organization’s mission?
o How does the program compare to similar activities that address the same
issue?
o How close were the achieved outcomes to the planned result?
o What are the indicators that need to be tracked to get a comprehensive
understanding of how the program has affected the clients?
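The taxonomy above can be restated compactly as a lookup structure. The sketch below simply encodes the three evaluation types and their guiding questions from the text; the dictionary layout and key names are illustrative choices, not part of McNamara’s framework.

```python
# The three common NGO evaluation types and their guiding questions,
# restated as a lookup table (structure and key names are illustrative).
EVALUATION_TYPES = {
    "goals-based": {
        "assesses": "extent to which the program meets predetermined goals",
        "questions": [
            "What is the status of progress toward the goals?",
            "Will the goals be achieved on the specified timelines?",
            "Are resources adequate to achieve the goals?",
        ],
    },
    "process-based": {
        "assesses": "the program's strengths and weaknesses",
        "questions": [
            "How does the program produce the results that it does?",
            "What levers drive success, and what impedes progress?",
            "How are program decisions made, and what actions follow?",
        ],
    },
    "outcomes-based": {
        "assesses": "whether activities bring about desired client outcomes",
        "questions": [
            "How closely is the program aligned with the mission?",
            "How does it compare to similar programs on the same issue?",
            "How close were achieved outcomes to planned results?",
            "Which indicators give a comprehensive view of client impact?",
        ],
    },
}

# Print a one-line summary of each type:
for name, info in EVALUATION_TYPES.items():
    print(f"{name}: {info['assesses']}")
```

A structure like this makes the contrast explicit: the three types differ in what they assess, not merely in when they are conducted.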
Growth of the NGO Sector
Since the 1970s, a profound shift has taken place in the role of non-
governmental organizations (NGOs). In the wake of fiscal crisis, the Cold War,
privatization, and growing humanitarian demands, the scope and capacity of national
governments have declined. The NGO sector began to fill the vacuum left by nation-
states in relief and development activities, both domestically and internationally.
While figures on NGO growth in the last three decades vary widely, most sources
agree that since 1970 the international humanitarian and development nonprofit sector
has grown substantially. Tables 3.3 and 3.4 illustrate this growth.70
The first table shows that within the United States alone, the number of
internationally active NGOs and their revenues grew much faster than the U.S. gross
domestic product.
Table 3.3 Changes in U.S. International NGO Sector, 1970-94 ($$ in U.S. Billions)

Year                NGOs    Revenues     US GDP
1970                  52      $0.614   $1,010.0
1994                 419      $6.839   $6,379.4
Growth since 1970   8.05x      11.3x       6.3x
70 Marc Lindenberg and Bryant Coralie, Going Global: Transforming Relief and Development Ngos (Kumarian Press, 2001).
The table below shows that similar trends are evident in the twenty-five OECD
Northern industrial countries.
Table 3.4 Growth in Revenue of Northern NGOs Involved in International Relief and Development: Flow of Funds from NGOs to Developing Countries by Source ($$ in U.S. Millions)

Year    Private    Public     Total    U.S. Share
1970       $800      $200    $1,000          50%
1997     $4,600    $2,600    $7,200          38%
Within the developing world, the number of local NGOs with a relief and
development focus has mushroomed. Although estimates of the size of the NGO
sector in any country are often unreliable, one source reports that in 1997 there were
more than 250,000 Southern NGOs.71 This growth has been facilitated by the retreat
of government provision in many developing countries – resulting in a reduced role in
welfare services – thereby widening the potential for non-state initiatives. Some
Southern NGOs now reach very large numbers of constituents, paralleling government
activities: for example, the Grameen Bank has over 7 million borrowers.72 The 1993
Human Development Report judged that some 250 million people were being touched
by NGOs, a number likely to rise considerably in the 21st century.73
71 Alliance for a Global Community, "The Ngo Explosion," Communications 1, no. 7 (1997). 72 "The Grameen Bank," http://www.grameen-info.org/bank/GBdifferent.htm 73 United Nations Development Program UNDP, "Human Development Report," (New York: Oxford Press, 1993).
Table 3.5 - Statistics on the U.S. Nonprofit Sector74

Overview of the U.S. Nonprofit Sector, 2004-2005

501(c)(3) public charities
  Public charities                                          845,233
  Reporting public charities                                299,033
  Revenues                                           $1,050 billion
  Assets                                             $1,819 billion

501(c)(3) private foundations
  Private foundations                                       103,880
  Reporting private foundations                              75,478
  Revenues                                               $61 billion
  Assets                                                $455 billion

Other nonprofit organizations
  Nonprofits                                                464,595
  Reporting nonprofits                                      112,471
  Revenues                                              $250 billion
  Assets                                                $692 billion

Giving
  Annual, from private sources                          $260 billion
  From individuals and households                       $199 billion
  As a % of annual income                                        1.9
  Average, from households that itemize deductions             $3.58
  Average, from households that do not itemize deductions       $551

Volunteering
  Volunteers                                              65 million
74 "The Nonprofit Sector in Brief - Facts and Figures from the Nonprofit Almanac 2007," (2006), http://www.urban.org/UploadedPDF/311373_nonprofit_sector.pdf.
With this growth, however, have come several challenges for the NGO
community – both within and outside the organization. First, the new waves of
complex emergencies have overwhelmed global institutional-response capacity and
heightened risks to those NGOs assist and to their own staff. Second, the declining
capacity of national governments has forced many agencies to take on
responsibilities they are not trained or equipped to handle. Often agencies face a
dilemma in deciding whether to function as a substitute for state services or to
pressure the state to play a stronger role again. Third, as resources become tighter,
NGOs face new pressures for greater accountability for program impact and quality.
These pressures come from donors, private and public, who want to know if their
resources were used effectively; from NGO staff, who want to know if their
programs matter; and from beneficiaries, who demand greater participation in
program design and implementation.
As the demand for NGO services seems only likely to increase in the future, there is
immense pressure on the NGO sector to engage in efforts to alleviate some of
these challenges. Interviews conducted by Hudson and Bielefeld75 and Fisher76 show
that one solution most NGO leaders believe in is transforming their
increasingly bureaucratic organizations into dynamic, living organizations with strong
“learning” cultures. Lindenberg and Bryant (2001), based on their work with large
international NGOs, conclude that NGOs “must increasingly develop learning cultures
in which evaluation is not thought of as cause for punishment but rather as a process
of partnership among all interested parties for organizational learning and
improvement”.
75 Bryant Hudson and Wolfgang Bielefeld, "Structures of Multinational Nonprofit Organizations," Nonprofit Management and Leadership 9, no. 1 (1997). 76 Julie Fisher, Nongovernments: NGOs and Political Development of the Third World (Connecticut: Kumarian Press, 1998).
Current Use of Evaluations in NGOs
Current practice indicates that there is weak evaluation capacity in NGOs.77
Although most agencies have monitoring and evaluation (M&E) processes to assess
their programs, almost all of them are limited by budgetary constraints. Donors who
demand that NGOs become more “professional” show little willingness to pay for
increased professionalism, as it translates into increased overhead costs.78 Internally
as well, NGOs face numerous problems with evaluation systems. For starters, evaluation
requires an organizational commitment of budget and staff to make it happen. Another
challenge is figuring out how to undertake evaluation of programs over time both
efficiently and effectively. Finally, NGOs are constantly challenged on when
and whether to share the findings from evaluations, and how to do so effectively.
The degree of evaluation practice in NGOs has also varied. For example,
compared with the application of evaluation in development programs, its application
to humanitarian action has been slower. According to ALNAP (2001)79, the first
evaluations of humanitarian action were not undertaken until the second half of the
1980s, and it was not until the early 1990s that evaluations took off (Figure 3.13).
77 Michael Edwards and David Hulme, Beyond the Magic Bullet: Ngo Performance and Accountability in the Post-Cold War World (Connecticut: Kumarian Press, 1996). 78 Jonathan Fox and David Brown, The Struggle for Accountability (Cambridge, MA: MIT Press, 1998). 79 ALNAP, "Humanitarian Action: Learning from Evaluation," ALNAP Annual Review Series (London: Overseas Development Institute, 2001).
Figure 3.13 Evaluations filed in ALNAP Evaluative Reports Database80, by year of publication (bar chart, 1986-2000; vertical axis: number of evaluations, 0-40)
The boom undoubtedly represents a significant investment by the
humanitarian system, and presents a considerable opportunity for critical reflection
and learning in humanitarian operations. Similarly, Riddell81 estimated that since the
1970s, some 12% of the US $420 million channeled in net aid has been subject to
evaluation. This share increased in the late 1990s to at least 20%82. Researchers
caution that despite the growing investment in evaluations, NGOs are lagging behind
in the effective use of findings83. So far the main focus in NGOs has been on
streamlining evaluation methods and design and establishing evaluation structures
within organizations and among partners. Less evident are the utilization
perspectives: looking at evaluation findings as a learning tool and establishing
processes to identify and maximize this use. Carlsson et al relate the problem of
underutilization of evaluations to the perception of decision-making in NGOs. They
80 ALNAP’s Evaluative Reports Database (ERD) was set up in 1997 to facilitate access to evaluative reports of humanitarian action and improve inter-agency and collective learning. 81 R.C. Riddell, Foreign Aid Reconsidered (Baltimore: Johns Hopkins Press, 1987). 82 Carlsson, Kohlin, and Ekbom, The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid. 83 Ibid.
state that organizations are perceived to make decisions according to a rational
model: where they define problems, generate options, search for information and
alternatives and then, on the basis of the collected information, make a choice.
Evaluations in this model are expected to provide careful and unbiased data on
project performances. Through feedback loops, this process will improve learning and
thus lead to better decisions.
However, in reality, organizations behave as political systems. Political considerations
enter the decision-making process in several ways. The “context” is political, as the
programs that are evaluated are defined and funded through political processes. The
evaluation itself is political because it makes implicit political statements about issues
such as the legitimacy of program goals and the usefulness of various implementation
strategies. Carlsson et al give an example of how the political context affects
evaluations. They argue that donor agencies face an inherent pressure to give,
because they commit themselves in advance to a certain amount, either through
annual budget allocations (in the case of government agencies) or through capital
subscriptions (from individual members). This pressure from agencies affects the
NGOs that receive funds in such a way that they no longer face financial penalties for
poor-quality projects. All they need to show is that a program that meets the donors’
objectives is executed within budget. Alan Fowler84 concluded that an almost
universal weakness of NGOs is their limited capacity to learn, adapt and continuously
improve the quality of what they do. He urged NGOs to put in place systems which
84 Fowler, Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development.
ensure that they know and learn from what they are achieving and then apply what
they learn.
While much has been written about the shortcomings of NGO evaluations and
critical reviews of them, the positive news is that a growing number of NGOs are
committing to improve their organizational structures and operations to facilitate
change. Recent books on NGO management give specific attention to assessing
performance (Fowler85; Letts86; Smillie and Hailey87) and the management of
information (Powell88). Lindenberg and Bryant89 list several accomplishments by
leading international NGOs, since 2000, in building their evaluation capacity and
systems. Some of these are –
- Oxfam GB produced a guide, “Monitoring and Assessing Impacts”, that
reflects Oxfam’s internal change processes in conducting assessments.
- Save the Children UK published “Toolkits – A practical guide to assessment,
monitoring, review and evaluation”, a collection of tools for improving how
their staff and partners conduct M&E.
- CARE USA developed their “Impact Guidelines”, a menu of impact
indicators for use in strengthening their programming goals.
85 Ibid. 86 Christine Letts, High Performance Nonprofit Organizations: Managing Upstream for Greater Impact (New York: Wiley, 1999). 87 Ian Smillie and John Hailey, Managing for Change (London: Earthscan, 2001). 88 Mike Powell, Information Management for Development Organisations, 2nd ed., Oxfam Development Guidelines Series (Oxford: Oxfam, 2003). 89 Lindenberg and Coralie, Going Global: Transforming Relief and Development NGOs.
Additionally, networks like ALNAP have recommended the adoption of
evaluation standards, similar to the U.S. Program Evaluation Standards, which are the
main set of standards in the wider evaluation field. On a smaller scale, NGOs have
produced their own guides on monitoring and evaluation.90 NGOs have utilized
advancements in technology to create centralized electronic evaluation libraries, inter-
and intranet linkages and web-based discussion boards to effectively share findings
among stakeholders.91 These efforts have effectively bridged the communication gap
within agencies that operate globally. However, the communication style in many
large NGOs has tended to be either too “heavy”, so that information and learning
sink without trace, or too “light”, so that they evaporate.92
An ALNAP study surveyed member agencies to assess current
practice of evaluation use and follow-up.93 It concluded that two types of factors play
a key role in the utilization of evaluation outcomes: (1) cultural, organizational and
managerial factors within agencies; and (2) factors related to the quality of
evaluations and the means of dissemination of results. The following list captures
some of the responses as to what factors contribute to underutilization of evaluation
findings.
- Evaluation subject
90 Desai and Potter, The Companion to Development Studies. 91 Some web-based links are www.aidworkers.net , Monitoring and Evaluation News: www.mande.co.uk , www.alnap.org/discus, International NGO training and research center: www.intrac.org, DAC Evaluation Network, www.edlis.org 92 Bruce Britton, "The Learning Ngo," INTRAC Occasional Paper Series, no. 17 (1998). 93 Bert Van de Putte, "Follow-up to Evaluations of Humanitarian Programmes," (London: ALNAP, 2001).
o Security situations in complex emergencies precluding access.
o The essentially short-term nature of many such interventions.
o The fact that humanitarian emergencies tend to be context-specific and
that, as a result, not all lessons are replicable.
- Evaluation process
o Delays in the finalization of the evaluation made people lose interest,
key persons were transferred and new emergencies drew attention.
o Lack of ownership and a sense of control among the main stakeholders
o It is unclear when starting the evaluation what it is that needs to be
changed at the end and who is responsible for this.
o Quality of the evaluation, buy-in to evaluation process beforehand,
agreement with recommendations, perceived authority and
competence of the evaluators, recommendations too difficult to deal
with or not politically/institutionally acceptable, too many
recommendations, evaluation took too long to complete and
stakeholders have moved to other things.
- Follow-up process
o Lack of a "champion" who sees it through distribution, meetings,
"after actions" and other follow-up.
o Once a report is finalized, there is not enough discussion and
interaction with the staff concerned on how they intend to implement
some of the recommendations and overcome constraints.
- Organizational Characteristics
o Mix of factors including organizational priorities, resources, perceived
importance of the evaluation.
o Reluctant attitude of regional offices or units.
o Lack of time among the staff as well as staff capacity and knowledge.
o Turnover of staff.
o Organization staff resistant to change.
o Lack of understanding and appreciation of the role of evaluation in
improving the programming/management of humanitarian
operations.
The report recommended that NGOs make evaluation follow-up an integral
part of their operations and invest resources to build systems and processes that
enhance use. Facilitators of utilization were linked to the presence of positive
structural and cultural characteristics that predispose organizational learning. In larger
organizations, the existence of a well-resourced evaluation unit was identified as an
important determinant of use. In such an environment there are dedicated resources to
ensure accountability and learning. There are clear decision-making structures,
mechanisms and lines of authority in place. Vertical and horizontal links between
managers, operational staff and policy-makers enable dissemination and sharing of
learning. There are permanent and opportunistic mechanisms for facilitating
organization-wide involvement and learning. For smaller organizations, the report
called for a scaled-back version of these characteristics but stressed their importance
nevertheless.
Similarly, a survey conducted by BOND – a network of over 280 UK-based
development NGOs – looked at its members’ views about the concept of
learning, as well as whether and how it happens in the context of their day-to-day
work.94 Only 29% of NGOs stated that they regularly refer to lessons learnt during a
project. When asked what factors inhibit their ability to use past evaluation lessons,
most NGOs cited 'time pressure' as the most important factor, followed by
inadequate organizational capacity (resources and facilities) and lack of clarity about
what is available and relevant. On what factor aids utilization the most, 59% agreed
that participation by stakeholders during planning of the evaluation increased
ownership of findings and further utilization.
A study conducted by the Canadian Centre for Philanthropy95 (2003) found
that the NGOs that had systems in place for evaluation utilization used them in the
following manner: 68% for improvement of programs and 55% for strategic planning.
The survey found that findings were least likely to be used for fundraising purposes or
for information sharing within the sector. What triggered the higher use among
respondents was direct involvement in the evaluation process by senior
management and, in some cases, the Board.
94 Jawed Ludin and Jacqueline Williams, Learning from Work: An Opportunity Missed or Taken? (London: BOND, 2003). 95 Michael H. Hall et al., "Assessing Performance: Evaluation Practices and Perspectives in Canada’s Voluntary Sector," ed. Norah McClintock (Toronto: Canadian Centre for Philanthropy, 2003).
The Swedish International Development Agency96 (SIDA) conducted a study
on evaluation use and concluded that for evaluation to be useful, human
factors – e.g., stakeholders’ knowledge of evaluation – have to be considered. They
also concluded that for effective use, the evaluation process must allow for the
involvement and effective participation of management and staff. The importance of
the organizational context and organizational support structures (e.g., the impacts of
power inequalities, conflicting interests and differing views on reality among
stakeholders) must be factored in while planning for evaluation use.
Drawing from development NGOs’ literature and practice, Engel et al (2003)97
outline three different steps to increase internalization of program evaluation results.
1. Participatory monitoring and evaluation involving stakeholders
2. Emphasis on results-based planning and management among staff, and
3. Improved organizational learning
96 SIDA, "Are Evaluations Useful? Cases from Swedish Development Co-Operation.," SIDA Studies in Evaluation (Swedish International Development Agency, 1999). 97 P. Engel, C. Carlsson, and A. van Zee, "Making Evaluation Results Count: Internalizing Evidence by Learning," in ECDPM Policy Management Brief No. 16 (Maastricht: European Centre for Development Policy Management, 2003).
Engel et al also identify several donor agency initiatives to promote learning within
the donors themselves and the agencies they support. These include DFID’s Performance
Reporting Information System (PRISM), a computer-based system that combines basic
project management information with qualitative information on the nature and
objectives of the program, and the World Bank’s communities of practice,
learning networks centered on particular themes and designed to establish trust and a
culture of sharing among staff. Another significant contribution by donor agencies
to promoting evaluation feedback and use was the DAC98 Working Party on Aid
Evaluation’s workshop organized in Japan in 2000. This workshop highlighted the widespread
concern of DAC members about current practices for disseminating lessons from
evaluations and the need for improved evaluation use to enhance aid policies and
programs.99
The RAPID (Research and Policy in Development) Framework developed by the
Overseas Development Institute, Britain’s leading think-tank on development issues,
identified four dimensions that influence use of evaluation and research.100
• The political context
• The evidence and communication
98 The Development Assistance Committee (DAC) is a specialized unit within the Organization for Economic cooperation and Development (OECD), whose members have agreed to secure an expansion of aggregate volume of resources made available to developing countries and to improve their effectiveness. To this end, members periodically review their amounts and nature of contributions to aid programmes, bilateral and multilateral, and consult each other on relevant aspects of their development assistance policies. 99 Organization for Economic co-operation and Development, "Evaluation Feedback for Effective Learning and Accountability," in Evaluation and Effectiveness, ed. Development Assistance Committee (Paris: OECD). 100 "Research and Policy in Development (Rapid)," Overseas Development Institute, http://www.odi.org.uk/RAPID/.
• The links among stakeholders
• The influence of the external environment
Figure 3.14 – The Research and Policy in Development Framework
Political Context
The framework views the evaluation process as itself a political process, from the
initial agenda-setting exercise through to the final negotiations involved in
implementing findings. Political contestation, institutional pressures and vested
interests matter greatly. So too, attitudes and incentives among stakeholders,
program history, and power relations greatly influence use. Findings of potential
use to the majority of staff in an organization may be discarded if they elicit
disapproval from the leadership. Political context includes: learning and
knowledge-management systems, structural proximity of evaluation units to
decision-makers, political structures and institutional pressures.
Evidence and Communication
Second, the framework identifies the quality of the evaluation as essential for use.
Influence is affected by topical relevance and the operational usefulness of the
findings. The other key set of issues concerns communication. The sources and
conveyors of information and the way findings are packaged and targeted can all
make a big difference in how the evaluation is perceived and utilized. The key
message is that communication is a demanding process and it is best to take an
interactive approach: continuous interaction with users offers greater chances of
successful communication than a simple, linear approach. Quality includes: the
evaluation design, planning, approach, timing, dissemination and the quality and
credibility of the evidence.
Links
Third, the framework emphasizes the importance of links: among evaluators and
users, their links to influential stakeholders, relationships among stakeholders, and
so on. Issues of trust, legitimacy, openness and formal and informal partnerships
are identified as important. The interpersonal and conflict-management skills
needed to manage defensiveness and opposition to findings are essential
competencies in staff conducting evaluations. Overall, more attention needs to be
paid to the relational side of evaluation. The framework cautions that using
evaluation is as much a people issue as it is a technical one, and perhaps more so.
External Influences
Fourth, the framework includes the ways in which the external environment
influences users, uses and the evaluation process. Key issues here include the impact
of external politics and processes, as well as the impact of donor policies and funding.
Trends within the issue area and relationships with peer organizations or networks
also affect the extent to which evaluation findings are used. This dimension includes
indirectly involved stakeholders (not direct users) whose actions can affect the use
(or non-use) of an evaluation.
A recent, innovative tool developed within the NGO sector to track and
measure effective use is the International Development Research Centre's (IDRC)
Outcome Mapping (OM). It offers a methodology that can be used to create planning,
monitoring, and evaluation mechanisms enabling organizations to document, learn
from, and report on their achievements.101 OM is initiated through a participatory
workshop, involving program stakeholders, led by an internal or external facilitator
who is familiar with the methodology. Using a set of worksheets and questionnaires,
the facilitator engages the participants to be specific about the clients the program
wants to target, the changes it expects to see, and the strategies it employs to be more
effective in the results it achieves. The originality of the methodology is its shift away from
101 Sarah Earl, Fred Carden, and Terry Smutylo, Outcome Mapping: Building Learning and Reflection into Development Programs (Ottawa: The International Development Research Center, 2001).
assessing the development impact of a program (defined as changes in state: for
example, policy relevance, poverty alleviation, or reduced conflict) and toward
changes in the behaviors, relationships, actions or activities of the people, groups and
organizations with which a program works directly. This shift significantly alters the
way a program understands its goals and assesses its performance and results. The
authors of this methodology claim it benefits those programs whose results and
achievements cannot be measured with quantitative indicators alone.
There are three components to OM:
(1) Intentional Design: helps a program establish its vision and operational
guidelines (e.g., who its partners are and how the program will contribute to
the overall mission of the organization).
(2) Outcome and Performance Monitoring: provides a framework for the ongoing
monitoring of the program's actions toward the achievement of outcomes.
(3) Evaluation Planning: helps the program identify evaluation priorities and
develop an evaluation plan.
Figure 3.15 – Outcome Mapping Framework
Intentional Design: vision; mission; outcome challenges; progress markers; strategy maps; organizational practices
Outcome and Performance Monitoring: monitoring priorities; outcome journals; performance journals
Evaluation Planning: evaluation plan
The key innovation introduced by this approach, which relates to evaluation use, is
that in its evaluation planning component it takes a learning-based view of evaluation
guided by principles of participation and iterative learning. OM operates under the
premise that the purpose of an evaluation is to encourage program decision-making to
be based on data rather than on perceptions and assumptions. OM emphasizes
stakeholder participation at all stages of the evaluation and identifies certain key
factors that are likely to enhance utilization of evaluation findings. They are grouped
into two categories: organizational factors and factors related to the evaluation.
Table 3.6 – Outcome Mapping factors that enhance utilization
Organizational factors: managerial support; promotion of evaluation through a learning culture; a participatory approach.
Evaluation-related factors: timely findings (completion matches the organization's planning or review cycle); high-quality and relevant data; findings that are consistent with the organizational context; a skilled evaluator.
Hatry and Lampkin102 suggest that NGOs use evaluation findings to make
informed management decisions about ways to allocate scarce resources and about
methods and approaches to program delivery that will help the organization improve
its outcomes. NGOs must find significant value in evaluations to accept the trade-off
of diverting staff, time and funding from program implementation to an
administrative report. This requires a mind-shift in which NGOs view evaluation and
evaluation use as a necessary component of providing services to their beneficiaries,
which means changing the organizational culture and including numerous
stakeholders in the process. The findings from evaluations must be transferred from
a written report to the agenda of managers and decision-makers.103 Getting NGOs to view evaluation as
102 Hatry and Lampkin, "An Agenda for Action: Outcome Management for Nonprofit Organizations." 103 Anthony Dibella, "The Research Manager's Role in Encouraging Evaluation Use," Evaluation Practice 11, no. 2 (1990).
a tool for learning instead of a mandate from a donor or an additional administrative
chore can be a challenge.
Many types of decision-making models are used in NGOs. Understanding
these models allows staff to make intentional choices about which model might be
most appropriate for the various decisions they confront. We will examine these
models for the purposes of decision-making around the use of evaluation findings.
The six models below describe how behavior can affect and manipulate the
decision-making process, sometimes in productive and sometimes in detrimental
ways for team decisions (Johnson and Johnson, 2000)104.
Method 1: Decision made by authority without group discussion
The designated leader makes all decisions without consulting group members.
Appropriate for simple, routine, administrative decisions; little time available to make
decision; team commitment required to implement the decision is low.
Strengths: takes minimal time to make a decision; commonly used in organizations (so the method is familiar); high on the assertiveness scale.
Weaknesses: no group interaction; the team may not understand the decision or be unable to implement it; low on the cooperation scale.
104 D.W. Johnson and F.P. Johnson, Joining Together: Group Theory and Group Skills (Boston: Allyn and Bacon, 2000).
Method 2: Decision by expert
An expert is selected from the group. The expert considers the issues, and makes
decisions. Appropriate when result is highly dependent on specific expertise and team
commitment required to implement decision is low.
Strengths: useful when one person on the team has overwhelming expertise.
Weaknesses: unclear how to determine who the expert is (team members may have different opinions); no group interaction; may become a popularity or power issue.
Method 3: Decision by averaging individuals' opinions
Each team member provides his/her opinion and the results are averaged. Appropriate
when time available for decision is limited; team participation is required, but lengthy
interaction is undesirable; team commitment required to implement the decision is
low.
Strengths: extreme opinions are cancelled out; errors typically cancel out; group members are consulted; urgent decisions can be made.
Weaknesses: no group interaction, so team members are not truly involved in the decision; the opinions of the least and most knowledgeable members may cancel each other; commitment to the decision may not be strong; unresolved conflict may exist or escalate; may damage future team effectiveness.
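The averaging step in Method 3 can be sketched in a few lines of code. This is an illustrative sketch only, not part of Johnson and Johnson's text; the option names and the 1–5 rating scale are hypothetical.

```python
# Method 3 sketch: each team member rates each option; the team adopts the
# option with the highest average rating. Options and ratings are hypothetical.
def decide_by_averaging(ratings):
    """ratings: dict mapping option -> list of member ratings (e.g. on a 1-5 scale)."""
    averages = {option: sum(scores) / len(scores)
                for option, scores in ratings.items()}
    # Extreme opinions are diluted by the mean, but members never interact.
    return max(averages, key=averages.get), averages

choice, avgs = decide_by_averaging({
    "expand program": [5, 4, 1, 4],   # one strong objector is averaged away
    "pause program":  [2, 3, 3, 2],
})
```

Note how the single dissenting rating of 1 barely moves the average, which mirrors the method's weakness that extreme (and possibly well-informed) opinions are cancelled out without discussion.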
Method 4: Decision made by authority after group discussion
The team creates ideas and has discussions, but a designated leader makes the final
decision. Appropriate when available time allows team interaction but not agreement;
clear consensus on authority; team commitment required to implement decision is
moderately low.
Strengths: the team is used more than in methods 1–3; listening to the team increases the accuracy of the decision.
Weaknesses: the team is not part of the decision; the team may compete for the leader's attention; team members may tell the leader what he/she wants to hear; there still may be no team commitment to the decision.
Method 5: Decision by majority vote
Discussion occurs until 51% or more of the team members make the decision.
Appropriate when time constraints require decision; group consensus supporting
voting process; team commitment required to implement decision is moderately high.
Strengths: useful when there is insufficient time to make a decision by consensus; useful when complete team-member commitment is unnecessary to implement a decision.
Weaknesses: taken for granted as the natural, or only, way for teams to make a decision; the team is divided into "winners" and "losers," which reduces the quality of the decision; the minority opinion is not discussed and may not be valued; conflict may remain unresolved and unaddressed; full group interaction is not obtained.
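The 51-percent rule in Method 5 reduces to a simple threshold test, sketched below. This is an illustrative sketch, not from the source; the vote values are hypothetical, and real teams would of course debate before voting.

```python
# Method 5 sketch: a motion passes once 51% or more of members support it.
def decide_by_majority(votes, threshold=0.51):
    """votes: list of booleans, True = in favor. Returns True if the motion passes."""
    share_in_favor = sum(votes) / len(votes)
    # The minority's reasoning is never examined here, only counted --
    # which is exactly the weakness the method description highlights.
    return share_in_favor >= threshold

passed = decide_by_majority([True, True, True, False, False])  # 3 of 5 in favor
```

With three of five in favor (60%), the motion passes; a 50/50 split would fail, since it falls below the 51% threshold.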
Method 6: Decision by consensus
Collective decision arrived at through an effective and fair communication process
(all team members spoke and listened, and all were valued). Appropriate when time
available allows a consensus to be reached; the team is sufficiently skilled to reach a
consensus; the team commitment required to implement the decision is high and all
team members are good communicators.
Strengths: the most effective method of team decision making; all team members express their thoughts and feelings; team members "feel understood"; active listening is used.
Weaknesses: takes more time than methods 1–5; takes psychological energy and a high degree of team-member skill (and can backfire if individual team members are not committed to the process).
Barriers to Evaluation Use in NGOs
Beyond the limitations of qualitative and quantitative data, evaluation use
faces challenges from political, social and organizational forces105: for example,
disagreement among staff about priority issues, conflicts around resources, staff
turnover, inflexible organizational procedures and changes in external conditions
(donors, issue area). Shadish, Cook, and Leviton106 grouped the obstacles into the
following categories:
(1) findings can threaten one’s self interests
(2) fear that the program will get eliminated
(3) program staff are not motivated by seeking efficacy
(4) the slow and incremental nature of change
(5) stakeholders often have limited influence on policies and programs
Evaluations often address mixed objectives and multiple stakeholders,
without prioritizing among them or considering what this may mean in terms of
approach. Multiple purposes may unintentionally undermine one another, as use by
one set of stakeholders may counter the intended learning use for others. For
example, from the point of view of those whose work is being evaluated, the knowledge that judgments
105 Weiss, "Have We Learned Anything New About the Use of Evaluation?." 106 Shadish, Cook, and Leviton, Foundations of Program Evaluation: Theories of Practice.
will be made and communicated in writing can create defensiveness. What is often
lacking is clarity and agreement about the purpose of the evaluation107.
NGOs are also challenged by the lack of robust mechanisms that recall and make
available findings from past evaluations to decision-makers. The process of storing
and recalling knowledge is complex, and its translation to action is a highly individual
and personal process that can be difficult to track. Information often tends to be
supply-driven, with evaluations pumping out findings on the assumption that they
will be automatically picked up. The challenge is to successfully point staff to
relevant evaluation lessons as and when they need the information.
Other impediments described in the literature include an organizational culture that
does not value learning, staff members who do not understand evaluation,
bureaucratic imperatives such as the pressure to spend regardless of quality, and the
lack of real incentives to change. The unequal nature of the aid relationship is also a
significant barrier. Why and by whom an evaluation is commissioned affects
ownership and hence use; for example, evaluations may be viewed in the field as
serving only headquarters' needs, not the needs of the program. Performance issues
can also inhibit use. Just as utilization is enhanced by motivated individuals willing to
‘champion’ the findings and promote use, it is also constrained by individuals who
block or fail to act. Some organizations have a culture where accountability tends to
be associated with blame. Evaluation reports can present a risk to an organization’s
107 Kevin Williams, Bastiaan de Laat, and Elliot Stern, "The Use of Evaluation in the European Commission Services - Final Report," (Paris: Technopolis France, 2002).
reputation. The perceived risk may lead staff members to suppress and reject findings
in the interests of protecting their survival.108
Beyond these practical problems of using evaluation findings, there are a few
philosophical challenges as well. Decision making in organizations is never linear,
and is often shaped by a group of decision-makers. While evaluation findings can
change perceptions, they are unlikely to bring all parties to agree on which facts are
relevant or even on what the facts are. Another problem for NGOs is that their
motivation for conducting the evaluation may differ from that of the funder. Carson109
notes that if a major motivation is to direct funding to a project with proven results,
there is little evidence that this happens with any frequency. A continuing source of
tension between the donors and NGOs is that there is seldom an agreement
beforehand about what benchmarks are important to measure and how the results will
be used. Vic Murray110 observes that regular and systematic use of evaluation findings
is still relatively uncommon. This is partly because utilization efforts do not appear to
produce value for the money, and are quickly abandoned.
The lack of adequate planning to time evaluations to inform key decision dates such
as funding cycles and annual program planning is also identified as a barrier. A study
of Doctors without Borders’ use of evaluations indicates that evaluations were not
108 Barb Wigley, "The State of Unhcr's Organization Culture: What Now?," http://www.unhcr.org/publ/RESEARCH/43eb6a862.pdf 109 Emmet D. Carson, "Foundations and Outcome Evaluation," Nonprofit and Voluntary Sector Quarterly 29, no. 3 (2000). 110 Murray, "The State of Evaluation Tools and Systems for Nonprofit Organiations."
used because they took place too late: the decisions they should have influenced had
already been made111.
Rosenbaum112 acknowledges that there are costs associated with evaluation
follow-up and use; but suggests that NGOs view these costs as opportunity costs. She
offers a few suggestions on how these costs can be managed: (a) NGOs should
allocate a percentage of their general operating budget for learning and evaluation use
that fits within the organization’s strategic plan; (b) build the evaluation follow-up
costs into the program’s budget as a fixed cost line item. Brett, Hill-Mead and Wu’s
(2000)113 examination of evaluation use by NGOs demonstrated the complexities and
challenges they face. While these organizations mirrored the resource constraints
mentioned above, some with global operations also struggled to manage information
across their various locations. Not having dedicated staff for evaluation and use
hampered organizations' attempts to incorporate learning into planning. The authors
suggest that establishing a culture of evaluation use must be a gradual process that
allows staff to find uses for data in their daily work and is simple enough for them
to embrace the process that led them to the data. Andrew Mott114 suggests that
strengthening the internal learning capacity of NGOs must be a critical priority of
donors. A strong, increasingly knowledgeable and effective organization can
111 Putte, "Follow-up to Evaluations of Humanitarian Programmes." 112 Nancy Rosenbaum, "An Evaluation Myth: Evaluation Is Too Expensive," National Foundation for Teaching Entrepreneurship (NFTE), http://www.supportctr.org/images/evaluation_myth.pdf. 113 Belle Brett, Lynnae Hill-Mead, and Stephanie Wu, "Perspectives on Evaluation Use and Demand by Users: The Case of City Year," New Directions for Program Evaluation, no. 88 (2000). 114 Andrew Mott, "Evaluation: The Good News for Funders," (Washington, DC: Neighborhood Funders Group, 2003).
maximize grantee funding and lead to desired impact. He recommends that funders
incorporate a utilization and learning component into evaluations.
Barriers to evaluation use can be summarized into the following categories:
Political
Political activity is inextricably linked to effective use. Programs are the results of
political decisions, so evaluations implicitly judge those decisions. Evaluations also
feed decision making and compete with other perspectives within the organization
(Green, 1990; King, 1988; Weiss, 1997; Mowbray, 1992; Carlsson et al., 1994).
NGO practice shows that political considerations enter the evaluation process from
start to finish, from what gets evaluated to how data get interpreted. The findings
from any evaluation are only partly logical and deductive; they rely equally on the
perspectives and interests of stakeholders. Organizations face a challenge in
navigating political interests to promote use, because use is likely to result in actions
and decisions that shift power, status and resources.
Procedural
Throughout the lifecycle of an evaluation, NGOs face challenges to use. The
lack of resources, time, and staff capacity and knowledge to conduct evaluations and
follow-up emerged repeatedly as a barrier to use. This constraint was reported at both
the organizational and the program level, where intended users and intended uses
were not identified during evaluation planning. On completion of evaluations, NGOs
were challenged to get the right information to the right people: those who are open
to and know how to use findings. Impeding factors were the timing of the evaluation,
levels of bureaucracy within an organization, the lines of communication within and
across these levels, and the degree of decision-making autonomy within program
units. Poor quality of reports also surfaced as affecting use, as stakeholders were
either unclear on how to transfer findings to instrumental use or did not see the
information as credible.
Social
The enthusiasm and engagement of staff is critical to the success of evaluation
utilization. Research highlighted that barriers to staff engagement range from lack of
ownership of the process and resistance to change to low motivation to seek efficacy
and excessive control by a few stakeholders. When potential user involvement was
driven by symbolic use, to meet donor requirements or a management directive, it
resulted in minimal use. In larger organizations with multiple programs and
competing agendas, the reluctance of teams to partner in evaluations resulted in
limited to no conceptual use. Personal resistance to use can be attributed to situations
where findings could threaten individual self-interests. Finally, in some organizations
there is a lack of incentive to use and learn; this is particularly the case when staff
rotate and are no longer motivated to observe the consequences of their decisions.
Organizational
Absent or inflexible systems and structures were identified as barriers to use.
NGOs lack the infrastructure to effectively disseminate and retrieve evaluation
results to inform in-time decisions. Information was often stored locally and in
inaccessible formats that inhibit sharing, resulting in a poor understanding of what is
available and relevant. Even when information was available and shared,
organizational decision-making models limited potential users from using the findings.
NGOs were unable to engage in conceptual and strategic use in environments that did
not provide an overarching framework on evaluation use and organization learning.
Staff remained focused on individual program evaluations but missed the larger
utilization opportunities. Finally, staff turnover emerged as a major barrier –
especially among primary intended users as utilization processes are dependent on
their active engagement throughout the evaluation cycle. New users who join the
process midstream seldom come with the same interests and agenda as those
originally involved. Additionally, this leads to a loss of institutional memory.
All of the studies on evaluation indicate that NGOs have become much more
aware of the need for evaluation within their operations, and have moved a step
closer to using evaluation as a mechanism to develop a wider perspective on NGO
effectiveness, looking beyond individual projects, across sectors and country
programs.
“If evaluation is to continue to receive its current levels of
attention and resources in NGOs, and be embraced by all –
whether at policy or operational level – it needs to demonstrate
clearly its contribution to improved performance.”
- ALNAP 2001
Organizational Learning
Definitions
Table 3.7 – Organizational Learning Definitions
Author(s) Definition of Organizational Learning
Chris Argyris and
Donald Schön115
OL occurs when members of the organization act as
learning agents for the organization, responding to changes
in the internal and external environments of the organization
by detecting and correcting errors in the organizational
theory-in-use, and embedding the results of their inquiry in
private images and shared maps of the organization.
Marlene Fiol and
Marjorie Lyles116
OL refers to the process of improving actions through the
development and interpretation of the environment, through
which cognitive systems and memories develop. Observable
organizational actions are a key criterion for learning.
George P. Huber117 OL is a consequence of discussion and shared
interpretations, changing assumptions and trial and error
activities. Increasing the range of potential organizational
behaviors is both necessary and sufficient as the minimal
115 Chris Argyris and Donald Schön, Organizational Learning: A Theory of Action Perspectives (Reading, MA: Addison-Wesley, 1978). 116 C. M. Fiol and M. A. Lyles, "Organizational Learning," The Academy of Management Review 10, no. 4 (1985).
condition for learning.
Peter Senge118
OL is where people continually expand their capacity to
create the results they truly desire, where new expansive
patterns of thinking are nurtured, where collective aspiration
is set free and where people are continually learning how to
learn together.
For the purposes of this research the definition of OL is summarized as:
learning which serves a collective purpose, is developed through experience and
reflection, is shared by a significant number of organizational members, stored
through institutional memory; and is used to modify organizational practices.
117 George P. Huber, "Organizational Learning: The Contributing Processes and the Literatures," Organization Science 2, no. 1 (1991). 118 Peter Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday, 1990).
Types of Learning
One of Argyris and Schön's most influential ideas, theories of action are the
routines and practices that embody knowledge.119 They are theories about the link
between actions and outcomes, and they include strategies for action, values that
determine the choice among strategies, and the assumptions upon which strategies are
based. The practices of every organization reflect the organization’s answers to a set
of questions; in other words, a set of theories of action. For example, a relief agency
embodies in its practices particular answers to questions of how to access and assist
vulnerable populations. The particular set of questions and answers (e.g., to
assist populations by providing supplementary feeding centers) constitutes the agency's
theories of action. Once theories of action are established, the process of learning
involves changes in these theories either by refining them (single-loop learning) or by
questioning underlying assumptions, norms, or strategies so that new theories-in-use
emerge (double-loop learning).
Single-loop learning occurs within the prevailing organizational frames of reference.
It is concerned primarily with effectiveness— how best to achieve existing goals and
objectives.120 Single-loop learning is usually related to the routine, immediate task.
According to Dodgson (1993), single-loop learning can be equated to activities that
add to the knowledge-base or organizational routines without altering the
fundamental nature of the activities. This is often referred to as “Lower-level
119 Chris Argyris and Donald Schön, Organizational Learning Ii: Theory, Method and Practice (Reading, MA: Addison-Wesley, 1996). 120 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.
Learning” (Fiol and Lyles 1985), “Adaptive Learning” (Senge 1990) and “Non-Strategic Learning” (Mason 1993).
Double-loop learning changes organizational frames of reference. This occurs when,
in addition to detection and correction of errors, the organization questions and
modifies its existing norms, procedures, policies and objectives. Double-loop learning
is related to the non-routine, the long-range outcome. This type of learning is
considered to be non-incremental because the organizational response will occur with
a newly formulated “mental map” (Levitt and March, 1988; Senge 1994). The
resulting learning reflects fundamental change in the culture of the organization
itself (Simon, 1991). Double-loop learning is also called “Higher-Level
Learning” (Fiol and Lyles 1985), “Generative Learning” (Senge 1990) and “Strategic
Learning” (Mason 1993).
Deutero-learning occurs when organizations carry out both single- and
double-loop learning. This is considered by theorists to be the most important level,
as it is the organization’s ability to learn how to learn. This awareness leads the
organization to create the appropriate environment and processes for learning121.
121 E. C. Nevis, A. J. DiBella, and J. M. Gould, "Understanding Organizations as Learning Systems," Sloan Management Review 36, no. 2 (1995).
Levels of Learning
Several authors noted that learning can occur at three levels: individual, group and
organizational.
Individual Level
Watkins et al122 described individual learning as a natural process in which
individuals discover discrepancies in their environment, select strategies based on
cognitive and affective understanding of these discrepancies, implement these
strategies and evaluate their effectiveness, and eventually begin the cycle again.
Argyris and Schön123 commented that individual learning is a necessary but
insufficient condition for organization learning. Senge124 argued that organizations
learn only through individuals who learn. Individual learning does not guarantee
organizational learning, but without it no organizational learning occurs.
Group Level
Senge noted that group learning is vital because groups, not individuals, are the
fundamental learning unit in organizations. This is where “the rubber meets the road”:
unless groups can learn, the organization cannot learn. Argyris and Schön125 noted
that group learning occurs when team members take part in dialogue and exchange of
122 K. Watkins, V. Marsick, and J. Johnson, eds., Making Learning Count! Diagnosing the Learning Culture in Organizations (Newbury Park, CA: Sage,2003). 123 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives. 124 Senge, The Fifth Discipline: The Art and Practice of the Learning Organization. 125 Argyris and Schön, Organizational Learning Ii: Theory, Method and Practice.
ideas and information. This allows underlying assumptions and beliefs to be revealed
and thereby allows for the creation and sharing of knowledge.
Organizational level
Organizational level learning is not merely the sum of individual learning126.
Learning at the individual level may not result in OL unless the newly created
knowledge is shared and communicated among individuals who constitute an
organization-level interpretation and learning system127. Organizations develop
mechanisms, such as policies, strategies, and explicit models, to capture and retain
knowledge despite the turnover of staff128.
126 Fiol and Lyles, "Organizational Learning." 127 R. L. Daft and K. E. Weick, "Toward a Model of Organizations as Interpretation Systems," The Academy of Management Review 9, no. 2 (1984). 128 B. S. Levitt and J. G. March, eds., Organizational Learning, Organizational Learning (Thousand Oaks, CA: Sage,1996).
Leading Theorists
During the past 30 years, and especially during the past decade, organizational
learning has emerged as a “fundamental concept in organizational theory” (Arthur &
Aiman-Smith, 2002, p. 738). By the early 21st century, “the learning organization”
and the concept of “organizational learning” had become indispensable core ideas for
managers, consultants and researchers. With its popularity and the proliferation of
literature on the subject, organizational learning has a multitude of constructs and
principles that define it. For the purposes of this research, the focus will remain on the
key thought-leaders who have contributed to the advancement of the field and
examination of concepts that relate to evaluation utilization. Despite the explosive
growth in publications on organizational learning, the literature has been plagued by
widely varying theoretical and operational definitions and a lack of empirical study
(Lant, 2000, p. 622). A major factor in this fragmentation is that organizational
learning has acted as a kind of conceptual magnet, attracting scholars from many
different disciplines to focus on the same phenomenon (Berthoin-Antal, Dierkes, et
al., 2001). The learning metaphor has offered fertile ground in which each discipline
could stake its claim, generating its own terminology, assumptions, concepts,
methods, and research. For example, the Handbook of Organizational Learning and
Knowledge (Dierkes, Berthoin-Antal, Child, & Nonaka, 2001) included separate
chapters for each of the following disciplinary perspectives on organizational
learning: psychology, sociology, management science, economics, anthropology,
political science, and history.
In 1978, Argyris and Schön wrote what is now considered by many to be the
first serious exploration of organizational learning. Their seminal book,
Organizational Learning, provided a foundation for the field and defined the explicit
or implicit approaches taken by different social science disciplines to learning and to
organization structures. Over the years, the more organizational learning and related
phenomena have been observed and studied, the more conceptually complex and
ambiguous they have become (e.g., Argyris, 1980; Barnett, 2001; Castillo, 2002;
Ortenblad, 2002). Recognizing that only individuals can act as agents of learning,
Argyris and Schön (1978) suggested that organizational learning occurs when
individual members “reflect on behalf of the organization.” Individual learning is
guided by “theories of action”—complex systems of goals, norms, action strategies,
and assumptions governing task performance (Argyris & Schön, 1978, pp. 14-15).
Theories of action are not directly observable but can be inferred from what people
say and do. To account for organizational learning, Argyris and Schön129 simply
extended the concept of individual-level to organizational-level theories of action.
Organizational learning may be said to occur when the results of inquiry on behalf of
the organization become embedded in explicit organizational “maps” (e.g., rules,
strategies, structures). For learning to become organizational, there must be roles,
functions, and procedures that enable organizational members to systematically
collect, analyze, store, disseminate, and use information relevant to their own and
other members’ performance.
129 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.
March and Olsen130 asked what organizations could actually learn in the face
of barriers such as superstitious learning and the ambiguity of history. Argyris and
Schön also focused on the limits to learning but argued that these limits could be
overcome if people or organizations replace “Model I” (single-loop learning) with
“Model II” (double-loop learning). Their approach implied a fundamental change in
thinking and behavior that could be created only through new kinds of consulting,
teaching, and research131. More than a decade later, Huber’s evaluation of the
literature still focused on the “obstacles to organizational learning from experience”
and evaluations132.
Without a doubt, organizational learning received its greatest thrust from Senge’s The
Fifth Discipline (1990). Senge’s book synthesized a number of innovative streams of
social science (e.g., action science, system dynamics, dialogue) into a vision of the
learning organization: one “where people continually expand their capacity to create
the results that they truly desire, where new and expansive patterns of thinking are
nurtured, where collective aspiration is set free, and where people are continually
learning how to learn together” (p. 3). The field of organizational learning has injected
a rich new terminology into the language of researchers and practitioners alike. This
new terminology includes concepts such as double-loop learning, systems thinking,
mental models, organizational memory, competency traps, dialogue, tacit knowledge,
reflection, defensive routines, absorptive capacity, and knowledge creation. Once
130 J. G. March and J. P. Olsen, Ambiguity and Choice in Organizations (Bergen: Universitetsforlaget, 1976). 131 Chris Argyris, Robert Putnam, and Diane McLain Smith, Action Science: Concepts, Methods and Skills for Research and Intervention (San Francisco: Jossey-Bass, 1985). 132 Huber, "Organizational Learning: The Contributing Processes and the Literatures."
again, given the widespread adoption of OL, these terms have come into wide
without necessarily conveying consistent meanings.
An important turning point in the literature on organizational learning
occurred when Senge reframed organizational learning as the “art and practice of the
learning organization133.” Senge writes that learning organizations embody five major
“disciplines”. By incorporating these disciplines, organizations can transform
themselves into learning organizations, able to overcome obstacles and thrive in
today’s and tomorrow’s markets. Senge’s first discipline is “systems thinking,” which
involves being able to see the big picture and understanding the interconnectedness of
the people, functions, and goals of the organization. The second is “personal
mastery”: the idea that individuals within the organization can help it by first
becoming clear about their own personal visions and then focusing on helping the
organization succeed. The third is “mental models,” which captures the differences
between individuals’ understandings of reality. An organization practices this
discipline by recognizing that people see the world through their own mental models
and then attempting to build shared models. The fourth discipline, “shared vision,”
builds on the shared mental models theme: involving members of an organization in
developing the vision makes its success more likely. The final discipline is “team
learning,” in which individual mastery is shared for the collective learning of the
organization. Learning for the organization as a whole is greater than the sum of the
individual learning of its staff134.
133 Senge, The Fifth Discipline: The Art and Practice of the Learning Organization. 134 Fiol and Lyles, "Organizational Learning."
Individual learning and organizational learning are similar in that they involve
the same phases of information processing: collection, analysis and retention. They
are dissimilar in two respects: information processing is carried out at different
system levels by different structures, and organizational learning involves an
additional phase of dissemination135. One framework that attempts to relate individual
and organization-level learning is an “Organizational Learning Mechanism” (OLM).
OLMs are institutionalized structural and procedural arrangements that allow
organizations to systematically collect, analyze, store, disseminate, and use relevant
information136. OLMs link learning in organizations to learning by organizations in a
concrete, directly observable fashion: they are organizational-level processes that are
operated by individuals. The most frequently discussed OLM in the literature is the
post-project review, which examines the role of evaluations in informing learning.
The field of organizational learning presents both a challenge and an opportunity,
demanding creative research designs conducted by multidisciplinary teams that take
into account multiple views of reality137. Multidisciplinary approaches are easy to
espouse but difficult to actually produce. The existence of interdisciplinary teams
does not necessarily enable social scientists to overcome deeply entrenched
135 P. M. Senge et al., The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations (New York: Currency/Doubleday, 1999). 136 M. Popper and R. Lipshitz, "Organizational Learning Mechanisms: A Cultural and Structural Approach to Organizational Learning," Journal of Applied Behavioral Science 34 (1998). 137 Ariane Berthoin-Antal et al., Handbook of Organizational Learning and Knowledge (Oxford University Press, 2001).
paradigmatic differences. As Berthoin-Antal et al. pointed out, “researchers
themselves need to learn how to learn better . . . they need to apply some of the
lessons from the study of organizational learning to their own research practice”
(p. 936). In
considering the different views of OL highlighted above, several important points of
agreement emerged among the different perspectives. There is considerable
agreement among the above-mentioned theorists that OL:
- Involves multilevel learning: OL needs to consider the individual, group and
organization levels of knowledge. Sharing ideas, insights and innovations within
these levels is a key component of learning. 138
- Requires inquiry: Inquiry is a necessary and sufficient condition for OL. Whether
inquiry is formal or informal, the cyclical process of questioning, data collection,
reflection, and action may lead to generating alternative solutions to problems.139
- Results in shared understandings: OL involves shared understanding that
integrates lessons about the relationship between actions and outcomes that
underlie organizational practices.140
138 P. Shrivastava, "A Typology of Organizational Learning Systems," Journal of Management Studies 20, no. 1 (1983). 139 J. Dewey, How We Think: A Restatement of the Relation of Reflective Thinking to Educative Process (Lexington, MA: D.C. Heath, 1960). 140 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives.
Main Constructs
Huber (1991) frames OL through the following constructs:
1. Knowledge acquisition: the process by which knowledge is obtained either
directly or indirectly.
2. Information Distribution: the process by which an organization shares information
among its members.
3. Information interpretation: the process by which distributed information is given
one or more commonly understood interpretations.
4. Organizational memory: the means by which knowledge is stored for future use.
Knowledge Acquisition
Organizations engage in many activities that acquire information. These can
be formal activities (such as evaluations, research and development, and market
analysis) or informal activities (such as reading articles or holding conversations).
These activities can further be grouped into two distinct learning processes that guide
them:
1. Trial-and-error experimentation. According to Argyris and Schön, learning occurs
when there is a discrepancy between what is expected to occur and what the
actual outcome is. This “error detection” is considered a triggering event for
learning.
2. Organizational search. An organization draws from a pool of alternative routines,
adopting better ones when they are discovered. Since the rate of discovery is a
function both of the richness of the pool and of the intensity and direction of
search, it depends on the history of success and failure of the organization.
In simple discussions of experiential learning based on trial-and-error learning
or organizational search, organizations are described as gradually adopting those
routines, procedures, or strategies that lead to favorable outcomes; each routine is
itself a collection of routines, and learning takes place at several nested levels. In such
multilevel learning, organizations learn simultaneously both to discriminate among
routines and to refine the routines by learning within them.
A familiar contemporary example is the way in which organizations learn to use some
software systems rather than others and simultaneously learn to refine their skills on
the systems that they use. As a result of such learning, efficiency with any particular
procedure increases with use, and differences in success with different procedures
reflect not only differences in the performance potentials of the procedures but also
an organization’s current competences with them. Multilevel learning typically leads
to specialization. By improving competencies within frequently used procedures, it
increases the frequency with which those procedures result in successful outcomes
and thereby increases their use. Provided this process leads the organization both to
improve the efficiency and to increase the use of the procedure with the highest
potential, specialization is advantageous. However, a competency trap can occur
when favorable performance with an inferior procedure leads an organization to
accumulate more experience with it, thus keeping experience with a superior
procedure inadequate to make it rewarding to use.
Information Distribution
Information distribution is a determinant of both the occurrence and breadth
of organizational learning. Organizations often do not know what they know. Except
for their systems that routinely index and store "hard" information, organizations tend
to have only weak systems for finding where a certain item of information is known
to the organization. But when information is widely distributed in an organization, so
that more and more varied sources for it exist, retrieval efforts are more likely to
succeed and individuals and units are more likely to be able to learn141. Thus,
information distribution leads to more broadly based organizational learning.
Program groups with potentially synergistic information are often not aware of where
such information could serve, and so do not route it to these destinations. Similarly,
senior managers who could use information synergistically often do not know of
its existence or whereabouts. Linking those who possess information to those who
need this information is what promotes organization-wide learning.
141 K. J. Krone, F. M. Jablin, and L. L. Putnam, eds., Communication Theory and Organizational Communication: Multiple Perspectives, Handbook of Organizational Communication (Newbury Park, CA: Sage,1987).
Combining information from different programs leads not only to new information
but also to new understanding. This highlights the role of information distribution as a
precursor to aspects of organizational learning that involve information
interpretation. In addition to traditional forms of information distribution such as
telephone, facsimile, face-to-face meetings, and memorandums, computer-mediated
communication systems such as electronic mail, bulletin boards, computerized
conferencing systems, electronic meeting systems, document delivery systems, and
workflow management systems can facilitate the sharing of information. Studies have
shown that such systems increase participation and result in better quality program
decisions since they are made by consensus and not by domination142. The
development of such information systems-enabled communities results in better
interpretation of information and greater group understanding. More importantly, it
enables equal participation at all levels and supports staff learning from each other
simultaneously (unlike traditional learning systems which are usually top-down and
time-consuming).
Information interpretation
Huber143 stated that organizational learning occurs when organizations
undertake sense-making and information interpretation activities. The lessons of
experience are drawn from a relatively small number of observations in a complex,
changing ecology of routines. What has happened is not always obvious, and the
142 Senge et al., The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations. 143 Huber, "Organizational Learning: The Contributing Processes and the Literatures."
causality of events is difficult to untangle. Nevertheless, people in organizations form
interpretations of events and come to classify outcomes as good or bad. Certain
properties of this interpretation of experience stem from features of individual
inference and judgment144. Individuals make systematic errors in recording the events of
history and in making inferences from them. They use simple linear and functional
rules, associate causality with spatial and temporal contiguity, and assume that big
effects must have big causes. These attributes of individuals lead to systematic biases
in interpretation145. Organizations devote considerable energy to developing
collective understandings of history. They are translated into, and developed through,
story lines that come to be broadly, but not universally, shared146. Some of the more
powerful phenomena in organizational change surround the transformation of the
status quo and the redefinition of concepts through consciousness raising, culture building,
double-loop learning, or paradigm shifts147. Within the evaluation context,
interpretation of findings is strongly influenced by the political nature of the
organization148. Different groups in an organization often have different targets
related to a program and therefore evaluate the same outcome differently. As a result,
evaluation findings are likely to be perceived as more negative or more mixed in
organizations than they are by individuals.
144 D. Kahneman, P. Slovic, and A. Tversky, Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982). 145 Ibid. 146 Daft and Weick, "Toward a Model of Organizations as Interpretation Systems." 147 Argyris and Schön, Organizational Learning: A Theory of Action Perspectives. 148 Levitt and March, eds., Organizational Learning.
Huber149 identifies five factors that affect shared interpretation of information:
(1) the uniformity of prior cognitive maps possessed by the organizational units;
(2) the uniformity of the framing of the information as it is communicated (uniform
framing is likely to lead to uniform interpretation); (3) the richness of the media used
to convey the information (communications that can overcome different frames of
reference and clarify ambiguous issues in a timely manner are considered richer,
while communications that take longer to convey understanding are less rich); (4) the
information load on the interpreting units (interpretation is less effective if the
information exceeds the receiving unit's capacity to process it adequately); and
(5) the amount of unlearning that might be necessary before a new interpretation can
be generated. Unlearning is the process through which learners discard knowledge,
in this case obsolete and misleading knowledge, to facilitate the learning of new
knowledge.
Organizational memory
Despite staff turnover, organizational memory is built and sustained through
routines such as rules, procedures, technologies, and cultures. Such routines not only
record organizational history but also shape its future path, and the details of that path
depend significantly on the processes by which the memory is maintained and
consulted. Organizations process vast amounts of information, but not everything is
built into their memory. The transformation of experience into routines and the
recording of those routines involve costs. A good deal of experience is unrecorded
149 Huber, "Organizational Learning: The Contributing Processes and the Literatures."
either because the costs are too great or because the organization judges the
experience to be of low value for future actions and outcomes. For example, certain
experiences are deemed exceptions to a rule and are not viewed as precedents for
the future.
Organizations vary in the emphasis placed on formal routines. Innovation-driven
organizations rely more heavily on tacit knowledge than do bureaucracies150.
Organizations facing complex uncertainties rely on informally shared understandings
more than do organizations dealing with simpler, more stable environments. There is
also variation within organizations. Higher level managers rely more on ambiguous
information (relative to formal rules) than do lower level managers151. Despite these
differences experiential knowledge, whether in tacit form or in formal rules, is
recorded in an organization’s memory. However, it will exhibit inconsistencies and
ambiguities. Some of the contradictions are a consequence of inherent challenges of
maintaining consistency in inferences drawn sequentially from a changing
experience. Others reflect differences in experience, the confusions of history, and
conflicting interpretations of that history. These latter inconsistencies are likely to be
organized into deviant memories, maintained by subcultures, subgroups, and
subunits152. With a change in the fortunes of the dominant coalition, the deviant
memories become more salient to action.
150 W. G. Ouchi, "Markets, Bureaucracies, and Clans," Administrative Science Quarterly, no. 25 (1980). 151 R. L. Daft and R. H. Lengel, eds., Information Richness: A New Approach to Managerial Behavior and Organizational Design, Research in Organizational Behavior (Homewood, IL: JAI Press,1984). 152 J. Martin, Cultures in Organizations: Three Perspectives (New York: Oxford University Press, 1992).
Retrieval of memory depends on the frequency of use of a routine and its
organizational proximity. Recently and frequently used routines are more easily
evoked than those that have been used infrequently153. The effects of organizational
proximity stem from the ways the memory is linked to responsibility. As routines that
record lessons of experience are structured around organizational responsibilities,
they can be retrieved more easily when referenced through those structures, which act
as advocates for those routines154. Availability is also partly a matter of the direct costs
of finding and using what is stored in memory. Information technology has reduced
those costs and made relatively complex organizational behavior economically
feasible, for example in the preparation of reports or presentations or the analysis of
financial statements155.
153 Linda Argote, Organizational Learning: Creating, Retaining and Transferring Knowledge (New York: Springer-Verlag, 1999). 154 Ibid. 155 Daft and Lengel, eds., Information Richness: A New Approach to Managerial Behavior and Organizational Design.
Evaluation Use and Organization Learning
Several authors argue that evaluation findings can have impact not only when
stakeholders adopt their conclusions directly, but also when they reflect on their
potential and possibilities. Reflecting on the types of evaluation uses, Cousins and
Leithwood156 opined that instrumental use results in single-loop learning whereas
conceptual use can bring about major shifts in understanding by promoting double-
loop learning. Caracelli and Preskill157 hypothesized that evaluation utilization has
significant potential for contributing to organizational learning and systematic
change. They suggest that including stakeholders in the planning and implementation
of the evaluation gives them opportunities to be reflective, share and build
interpretations (conceptual use) and finally place findings into action (instrumental
use). Levitt and March158 framed three organizational behaviors through which
learning occurs:
1. Behavior in an organization is based on routines. Actions are driven by matching
existing procedures to situations rather than by intention-driven choices.
2. Organizational actions are history-dependent. Routines are based on
interpretations of the past more than anticipations of the future.
3. Organizations are oriented to targets -- their behavior depends on the relation
between the outcomes they observe and the aspirations they have for those
156 J. Bradley Cousins and Kenneth A. Leithwood, "Current Empirical Research on Evaluation Utilization," Review of Educational Research 56, no. 3 (1986). 157 Valerie J. Caracelli and Hallie Preskill, "The Expanding Scope of Evaluation Use," New Directions for Evaluation, no. 88 (2000). 158 B. Levitt and J. G. March, "Organizational Learning," Annual Review of Sociology, no. 14 (1988).
outcomes. Sharper distinctions are made between success and failure than among
gradations of either.
Within such a framework, organizations are seen as learning by encoding
inferences from history into routines that guide behavior. The generic term "routines"
includes the forms, rules, procedures, conventions, strategies, and technologies
around which organizations are constructed and through which they operate. Routines
are independent of the individual actors who execute them and are capable of
surviving considerable turnover in individual actors. Routines are transmitted through
socialization, education, imitation, professionalization and personnel movement.
Evaluation is a key mechanism that allows an organization to assess these routines
and provide feedback for improvements. Levitt and March recognized that even
though routines are independent of individuals, to bring about changes in routines the
organization needs to involve not only the individuals who directly perform the
routines but also those who rely on them indirectly. The general expectation is that
evaluation utilization will become common when it leads to favorable routines.
Learning occurs best among individuals who regard the information they are
reviewing (i.e., evaluation findings) as credible and relevant to their needs. Involving
stakeholders in designing and conducting an evaluation helps assure their ownership
of, and interest in, its findings. Learning also occurs best among individuals who have
an opportunity to ask questions about evaluation methods, consider other sources of
information about the topic in question (including their own direct experiences), and
at the same time hear others’ perspectives.159
A learning approach to evaluation is contextually sensitive and ongoing, and supports
dialogue, reflection, and decision making based on evaluation findings160. The
authors conclude that the primary purpose of an evaluation is to support learning that
can ultimately lead to effective decision making and improvement in departmental,
programmatic, and organization-wide practices. They argue that to achieve learning,
the evaluation planning must:
• Consider the organizational context (stakeholders’ needs, political realities, etc.)
• Be conducted often enough to become organizational routines
• Actively engage stakeholder participation— in planning and interpretation
A learning approach can be taken with any kind of evaluation. The keys to learning
from an evaluation are that the findings remain relevant and credible to potential
users and that there are processes to facilitate action-oriented use. This means
establishing a balance between accountability and learning roles for evaluation.161
159 Rosalie T. Torres, "What Is a Learning Approach to Evaluation?," The Evaluation Exchange VIII, no. 2 (2002). 160 R. T. Torres and H. Preskill, "Evaluation and Organizational Learning: Past, Present and Future," American Journal of Evaluation 22, no. 3 (2001). 161 R.T. Torres, H. Preskill, and M.E. Piontek, Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations (Thousand Oaks, CA: Sage, 1996).
Chapter 3 provided a review of evaluation use and organizational learning theories.
The chapter also discussed the challenges in evaluation utilization specific to the
NGO sector and answered several of the research questions that guided this study. It
also highlighted themes that were built into the practitioner survey, summarized
below:
(1) The different types of uses and their relative importance
(2) The human factors that influence use - role of stakeholders; user biases and
interests;
(3) The evaluation factors that influence use – the quality, structure and content
and timing of evaluations
(4) The organizational factors that influence use – decision making models;
organizational learning frames; systems and tools to enable use.
Chapter 4 presents the responses from the survey, which add to the knowledge
gathered in the literature review. Together they informed the utility model presented
in Chapter 5.
Chapter 4: Presentation of Survey Results
The purpose of this chapter is to present the results from the survey of 111 staff
from 40 NGOs. The data is presented in three sections corresponding to the stages of
an evaluation: planning, execution, and follow-up. The survey was used to collect
information about how organizations use evaluations; how the factors that trigger
evaluations are applied throughout the lifecycle of an evaluation; and what systems
and processes currently support use and how they can be improved.
Note: the number at the beginning of each table represents the question number on the survey.
Stage 1: Evaluation Planning
The questions below attempt to understand how the concept of utilization is
incorporated in the planning of an evaluation. They explore how respondents define
intended users and intended uses, and the involvement of users in planning.
Table 4.1 and the corresponding Chart 4.1 show how respondents grouped intended
users. All respondents selected Program Staff, highlighting the importance of those
working at the program level as essential users of findings. Respondents also
indicated senior management as an important user group (81%). Donors came in
third at around 66%. Fewer cited Board members (27%) and beneficiaries (27%).
Table 4.1 – Intended users grouping
#7: Who do you consider as a potential user of program evaluations? You
can make multiple selections.
Answer Options | Response Percent | Response Count
Program Beneficiaries | 27.0% | 30
Program Staff | 100.0% | 111
Senior Management | 81.1% | 90
Board | 27.0% | 30
Donors | 65.8% | 73
Issue Experts (outside the organization) | 40.5% | 45
Others (please specify) | 0.0% | 0
Chart 4.1 – Intended users grouping
[Bar chart of the Table 4.1 percentages: Program Beneficiaries 27.0%, Program Staff 100.0%, Senior Management 81.1%, Board 27.0%, Donors 65.8%, Issue Experts 40.5%]
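The percentages in Table 4.1 follow directly from the response counts over the 111 respondents. A minimal sketch of that arithmetic (the dictionary simply restates the table’s counts):

```python
# Response counts from Table 4.1 (n = 111 respondents; multiple selections allowed)
counts = {
    "Program Beneficiaries": 30,
    "Program Staff": 111,
    "Senior Management": 90,
    "Board": 30,
    "Donors": 73,
    "Issue Experts": 45,
}

n = 111  # total survey respondents

# Each percentage is the count divided by the respondent total, to one decimal
percentages = {group: round(100 * count / n, 1) for group, count in counts.items()}

print(percentages)
```

Running the sketch reproduces the reported figures; for example, 90/111 yields the 81.1% shown for senior management.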
The survey sought to identify each stakeholder group’s involvement during the
evaluation planning phase. Respondents were asked to select only one response per
group, the one that most closely represented the average. As seen in Table 4.2,
program staff were typically involved more than 50% of the time, with senior
management close behind. What stands out is the low involvement of donors in
planning an evaluation: 54% of respondents reported donor involvement less than
20% of the time. This is interesting because, in the previous question, donors were
identified among the top three potential user groups of evaluations.
Table 4.2 – Involvement of potential users in planning an evaluation
#8: How often are potential users involved in planning an evaluation?

Answer Options                             < 20% of   20% -   50% -   > 80% of
                                           the time   50%     80%     the time
Program Beneficiaries                      63%        23%     10%     5%
Program Staff                              0%         18%     55%     27%
Senior Management                          9%         48%     32%     12%
Board                                      81%        16%     3%      0%
Donors                                     54%        24%     16%     5%
Issue Experts (outside the organization)   84%        14%     3%      0%
In the next question, there is alignment among over two-thirds of the respondents on
the importance of involving potential users in planning an evaluation.
Table 4.3 – Importance of involving potential users
#9: What do you think of the following statement: "Evaluations get used only if potential users are involved in the planning of the evaluation"

Answer Options      Response Percent   Response Count
Strongly Agree      46.0%              51
Somewhat Agree      31.0%              34
Somewhat Disagree   19.0%              21
Strongly Disagree   4.0%               4
Total               100%               111
In the question on use, respondents anticipated using the evaluation results in a variety of ways, the most common being program improvement. Although addressing donor needs was cited in the literature as a major reason for evaluations, the responses reveal that respondents rank it on par with or lower than program improvement and assessing the impact of the organization. The survey also asked respondents to provide an example of how evaluation results were used. A total of 72 examples were provided by the 111 respondents. Analysis reveals that 54% of the examples can be classified as using results as a basis for direct action, 34% to influence people's thinking about an issue, 7% for donor compliance and 5% pertained to understanding program measurement in general. Of the examples grouped under direct action, 65% were to improve program processes, 21% to inform strategic and program planning, 13% to make funding allocations and 1% to reorganize staff. The "influence" examples consisted primarily of ways that the organization used results to obtain or justify funding to donors. A few used results to inform the field, such as through conferences and publications.
Table 4.4 – Uses of program evaluations
#10: What are program evaluations mostly used for? Rank the following with 1 being most important and 4 being least important.

Answer Options                              1    2    3    4    Response Count
Program course correction                   63   26   16   6    111
Report to funder/donor                      11   42   58   0    111
Inform beneficiaries                        11   0    5    95   111
Understand overall impact of organization   21   42   35   13   111
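One way to summarize the ranking counts in Table 4.4 is a mean rank per use (lower means more important). This is an illustrative computation, not one the survey analysis itself performs; the counts come directly from the table.

```python
# Mean rank per use, computed from the response counts in Table 4.4
# (rank 1 = most important, 4 = least important; 111 respondents per row).
counts = {
    "Program course correction":  [63, 26, 16, 6],
    "Report to funder/donor":     [11, 42, 58, 0],
    "Inform beneficiaries":       [11, 0, 5, 95],
    "Understand overall impact":  [21, 42, 35, 13],
}

def mean_rank(row):
    """Weighted average of ranks 1-4 over all respondents in the row."""
    return sum(rank * n for rank, n in enumerate(row, start=1)) / sum(row)

for use, row in counts.items():
    print(f"{use}: {mean_rank(row):.2f}")
```

The resulting mean ranks (course correction about 1.7, donor reporting about 2.4, overall impact about 2.4, beneficiaries about 3.7) illustrate the claim in the text that donor reporting ranks on par with or lower than program improvement and impact assessment.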
Chart 4.2 – Uses of program evaluations
[Stacked bar chart of the rankings in Table 4.4: increase efficiency of program, fundraise/donor relations, inform beneficiaries, understand overall impact of organization]
The next question provides insight into the factors that influence use. Responses indicate that involvement of senior management (which we could interpret as key internal decision-makers; 80 of 111 respondents) and of donors (91 of 111) significantly increases use, while a lack of interest among staff (58 of 111) and poor quality of the evaluation (48 of 111) lead to low use. A majority of respondents (62 of 111) expressed the view that, in the absence of a policy or process to guide use, findings are not utilized, which might signify an opportunity to increase use if such a policy or process were in place. Also, resource constraints, often cited in the literature as a reason for non-use, do not factor in the respondents' view (85 of 111 consider them neutral).
Table 4.5 – Criteria that impact evaluation use
Question #11: How do you think the following criteria impact evaluation use? (A selection is required for each item on the list.)

Answer Options                                              Eval not   Neutral (no      Eval   Response
                                                            used       impact on use)   used   Count
Evaluation findings that are too critical of the program    11         90               10     111
Low quality of the evaluation content and report            48         42               21     111
Recommendations are unclear or articulated badly            53         48               10     111
Time and budget constraints within the organization         21         85               5      111
Staff's lack of interest in the program or evaluation       58         21               32     111
Involvement of senior management in the evaluation          5          26               80     111
Involvement of program donors in the evaluation             0          20               91     111
There is no process/policy to guide evaluation use          62         27               22     111
Chart 4.3 – Criteria that impact evaluation use
[Bar chart of the responses in Table 4.5, grouped by "Not used", "No impact on use" and "Used"]
Stage 2: Evaluation Implementation
The questions below attempt to understand how users and uses are factored in during the course of the evaluation.
On the question of how often respondents have participated in their program evaluations, respondents indicated a high level of participation in the planning and finalizing stages of an evaluation and relatively lower participation in implementation. Notably, while 48 of the 111 respondents indicated in the previous question that low quality of the evaluation results in non-use, from the table below we can infer that very few of them are involved in designing the methodology of the evaluation, which can influence the quality and rigor of the study.
Table 4.6 – Participation in evaluation planning
#13: How often have you participated in each of the following planning activities of an evaluation?

Answer Options                    Never or      Around   Around   Around   Almost all
                                  very little   25%      50%      75%      the time
Setting evaluation objectives     0%            0%       19%      30%      51%
Selecting the evaluator           5%            10%      15%      29%      41%
Designing of methodology          10%           26%      38%      12%      14%
Conducting the evaluation         14%           9%       7%       51%      18%
Analyzing/interpreting the data   0%            4%       23%      30%      43%
Designing the report              10%           23%      41%      19%      7%
The next question explores the relative importance of the various components of an evaluation report. 83 of 111 respondents ranked the analysis and recommendations as the most important aspect of a report, and 67 ranked follow-up steps targeting use second.
Table 4.7 – Evaluation report interests
#17: Rank the following in their order of importance (1 being the most important; you are required to assign a unique rank to each line): "In the evaluation report of your program, you are interested in...."

Answer Options                                  1    2    3    Response Count
The research methods                            17   33   61   111
The analysis and recommendations                83   11   17   111
The follow-up of how you can use the findings   11   67   33   111
Chart 4.4 – Evaluation report interests
[Stacked bar chart of the rankings in Table 4.7: research methods, analysis and recommendations, follow-up on use of findings]
The responses below support the literature finding that periodic and consistent evaluations of programs promote use. This question was framed to gauge respondents' consideration of timing as a factor that influences use. 57% indicated that evaluations are currently done only at the completion of projects, while a nearly identical share (55%) indicated that mapping evaluations to key program decision-making cycles is the ideal. While there is always a decision to be made at the completion of a project on whether to continue funding, the responses on the ideal model reflect a need to also time interim evaluations around program decision points.
Table 4.8 – Program evaluation timing
#18: When should a program be evaluated to promote use of findings?

Answer Options              Ideal Model   Current Practice (avg)
More than once a year       3%            13%
Annually                    10%           9%
At key program milestones   55%           22%
At the end of the program   32%           57%
While there is strong support for tailoring evaluation recommendations to users, respondents seem more balanced when it comes to formatting multiple reports. This relates well to the literature, which suggests that use is promoted where there is information relevance and specificity. As for tailoring reports, if an organization has an active internal information exchange, then evaluation findings can be extracted into specific formats to meet user requirements as the need arises. As respondents' preference indicates, maintaining a minimal number of reports customized to users can ensure uniform interpretation of findings.
Table 4.9 – Evaluation reports expectations
#16: How often do evaluation reports meet your expectations?

Answer Options              Response Percent   Response Count
Less than 20% of the time   9.5%               11
Between 20% - 50%           47.6%              53
Between 50% - 80%           33.3%              37
More than 80% of the time   9.5%               11
Total                       100%               111
Table 4.10 – Evaluation recommendations specificity #1
#14: "Evaluation recommendations must come with specific recommendations for specific users"

Answer Options      Response Percent   Response Count
Strongly Agree      65.0%              72
Somewhat Agree      30.0%              33
Somewhat Disagree   5.0%               6
Strongly Disagree   0.0%               0
Total               100%               111
Table 4.11 – Evaluation recommendations specificity #2
#15: "In order to promote use, there needs to be multiple versions of the evaluation report - matching findings with user interests/needs"

Answer Options      Response Percent   Response Count
Strongly Agree      25.0%              28
Somewhat Agree      45.0%              50
Somewhat Disagree   30.0%              33
Strongly Disagree   0.0%               0
Total               100%               111
Stage 3: Evaluation Follow-Up
The questions below attempt to understand the organizational context and extract the
contextual barriers to using evaluation findings.
The responses highlight the value stakeholders (in this case, potential users) place on evaluation follow-up and the importance of allocating resources to that activity. This response is also in sync with what respondents expressed in Question #17 on the importance of including follow-up actions towards use within the report.
Comments underscored the significance of evaluation as a strategic management tool.
Respondents reflected that when used effectively, evaluations promote a culture of
organizational learning and enhance accountability for results. Some comments specifically called for the organization's management to give careful consideration to evaluation findings, recommendations and lessons learned.
Table 4.12 – Evaluation follow-up
#12: "The costs of investing in an evaluation follow-up process outweigh the benefits"

Answer Options      Response Percent   Response Count
Strongly Agree      46.0%              51
Somewhat Agree      31.0%              34
Somewhat Disagree   19.0%              21
Strongly Disagree   4.0%               4
Responses related to decision-making models indicate a strong preference for a model that allows the team to generate ideas and hold discussions, but then has a designated leader make the final decision (66%). In practice, however, there seems to be a strong split between this model and one where the designated leader makes all decisions without consulting group members (40%). The literature informs us that the latter model is not conducive to group ownership of evaluation results or to learning from them.
Table 4.13 – Decision-making models
#19: Of the decision-making models below, which do you think promotes evaluation use? And which model is practiced within your program? (You can select the same model for both questions.)

Answer Options                                         Ideal Model   Current Practice
Decision by averaging team members' opinions           0.0%          0.0%
Decision by majority vote                               4.0%          12.0%
Decision by team consensus                              22.0%         12.0%
Decision made by authority after group discussion      66.0%         32.0%
Decision made by authority without group discussion    2.0%          40.0%
Decision made by evaluation expert / evaluator         6.0%          4.0%
Other (please specify)                                 0.0%          0.0%
The literature often cites the power and influence of donors with respect to evaluations and program decision making. Respondents support this finding. However, they ranked changes in the organizational mandate as the leading driver of program change. What this question does not reveal is the role of evaluation findings in influencing a change in the organization's mandate. Is there a link between evaluations and organization-level learning?
Table 4.14 – Drivers of program change
#20: What drives program changes? Rank the following with 1 being the most important.

Answer Options                     1    2    3    4    Response Count
Change in organizational mandate   63   18   12   18   111
Donor requests                     23   53   12   23   111
Client/beneficiary requests        23   23   24   41   111
Evaluation findings                0    18   64   29   111
Other (please specify)             0    0    0    0    0
Chart 4.5 – Drivers of program change
[Stacked bar chart of the rankings in Table 4.14: organizational mandate, donor requests, client/beneficiary requests, evaluation findings]
The responses below indicate the opportunity this research presents to NGOs: even if it does not lead to an organization-wide approach to evaluation use, there might be ways in which current practices and policies can be enhanced. The next set of questions indicates respondents' strong preference for a model that links evaluation use to the organization level, going beyond program-level engagement. An observation here is that despite the long history of evaluation use theory, pressure from donors and the resources spent, few organizations seem committed to a formal evaluation system that maximizes use.
Table 4.15 – Prevalence of evaluation use process
#21: Is there a process for evaluation use in your organization?

Answer Options                                                       Response Percent   Response Count
Yes, we have a formal process where evaluation reports are shared,
reviewed, analyzed and findings applied, where applicable.           16.0%              18
No, it's up to the individual staff members to do as they please
with the evaluation report.                                          24.0%              27
Some departments have a formal process, some don't. There is no
organization-wide policy.                                            56.0%              62
Other (please specify)                                               4.0%               4
Total                                                                100%               111
Other (please specify)
“We have a formal policy but implementation is not as systematic as it
should be”
“we are establishing formal processes”
“No formal process (i.e. Policy) but still all items in point 1 still apply”
“program is stand-alone, so our use of evaluation reports is very
localized”
The following questions were open-ended to capture respondent feedback on organizational processes that influence evaluation use.
#22: Please answer the following in the space provided using your own words:
What is the most effective tool or method to keep evaluation findings current in
organization memory?
All respondents answered this question. The comments are grouped into the following
five categories: Policy, Systems, Relevance, Accountability and Transparency.
Figure 4.1 – Tools to keep evaluation findings current in organization memory
Policy: 43% of comments fall under this category. Suggestions included formal processes/organizational policy offering structured guidance on how to incorporate evaluation findings into future program planning, and the creation of a process that continuously reviews and adapts findings into the next planning cycle.
Systems: 27% commented on the need for tools and technology that allow for storage and easy retrieval of learning, and on the importance of investing in supporting systems that build efficiency into staff processes.
Accountability: 13% recommended building utilization into individual staff work plans. Responses called for clear structures of responsibility within the organization to successfully track compliance. Some proposed a "carrot and stick" approach: they felt that improved quality and targeted dissemination of findings may be insufficient to promote use, so incentive structures need to be built into the system, or penalties established for not considering use (recognizing that there could be legitimate reasons for non-use).
Relevance: 7% suggested linking the role of evaluation and its findings to the overall
mission of the organization. While evaluations address specific program issues, tying
the findings to the strategic questions posed at the organization-level can increase the
relevance and acceptance of findings among staff.
Transparency: 5% suggested that sharing findings throughout the organization could lead to informal learning and cross-checking that would keep findings in memory. The comments ranged from simply making the findings available on an easily accessible platform to calls for structured and targeted dissemination that guides individual staff in their planning.
#23: Please complete this sentence: Any process or model that is adopted to increase
evaluation use MUST consider the following...
Respondents were asked to describe, in their own words, what processes can increase use. 96 comments were offered. The results are grouped into the following categories: People, Systems and Organization.
Figure 4.2 – Processes that can increase use
[Diagram grouping comments into three categories – People: representation of ALL stakeholders, buy-in from decision-makers; Systems: simple and practical, flexibility, quality; Organization: commitment to use, resource allocation, ongoing learning]
People: Within this category, 56% of the comments identified stakeholder involvement as critical. 32% emphasized that buy-in from decision-makers will ensure that required resources are allocated for follow-up activities. The remaining comments covered clarity of staff roles in use, access to evaluation experts, and the role of senior management and leadership as champions of evaluation use.
Systems: 42% of comments called for a system that is easy to use and simple to implement throughout the organization, 38% focused on the quality of the system, and 12% reflected on a model that would be easy to maintain. Comments included the need to manage "information overload" and to consult users to identify preferred communication styles and desired content.
Organization: The majority of comments (82%) related to the importance of the organization's commitment to using findings and to learning overall. Some comments also related to the organization's commitment of resources for evaluation follow-up activities, e.g. dissemination of findings, building shared interpretation and tracking utilization. Disclosure of negative or controversial evaluation findings can obviously create difficulties for organizations; however, a few respondents held the view that the long-term benefits of disclosure outweigh the short-term setbacks. Greater disclosure can enhance the credibility of the organization and boost the validity of favorable findings. Respondents called for evaluation use to become more reliable and systematic. As one respondent put it, organizations need to emphasize that "learning is not optional".
#24: Please provide ONE reason why you would or would not refer to
a past evaluation during program planning.
There were 83 responses to the “would refer” question and 91 for the “would not
refer” question. The responses are once again grouped, based on content analysis, into
categories that reflect a significant percentage of the responses.
Figure 4.3 – Reasons why evaluations get referred or not
[Diagram – Would refer: ongoing program, organizational practice, increased issue knowledge, high quality results. Would not refer: concluded program, irrelevant findings, lack of guidance, capacity constraints]
Capacity Constraints
Would refer when:
(1) The evaluation is conducted mid-stream of the program. (63%)
(2) The organization has a process and practice to use findings. (24%)
(3) The findings from the past evaluation increased issue/program knowledge. (6%)
(4) The quality and content of the past evaluation is good. (7%)
Would not refer when:
(1) The program has concluded or is in its final stage. (19%)
(2) The findings are of poor quality and recommendations are not practical. (52%)
(3) There is no policy or process around evaluation follow-up and/or learning. (23%)
(4) There are time and resource constraints. (6%)
Chapter 5: The Utility Model
The purpose of this chapter is to present the evaluation use model that was developed
using the literature review and the data gathered. The chapter begins with an
explanation of the utility model. This is followed by a list of practical steps on how
NGOs can implement this model. Building on the review of literature, past practice and the survey of practitioners, this model is innovative in that it incorporates realities external to the project that influence evaluations. Traditionally, NGOs approach evaluation and utilization within the purview of the specific program. Questions revolve around what the evaluation seeks to accomplish, the data collection methods, the qualifications of the evaluator and the publication of the findings. While these are all necessary steps to aid utilization, this study has found them to be insufficient. The utility model provides a unique insight into NGO evaluation practice by weaving two key links into the current thinking: human and organizational factors.
Incorporating the intended users' interests and capabilities increases the extent to which an evaluation gets used and re-used. And linking utilization to the organization level shifts the view of evaluation from a narrow, program-restricted lens to one that impacts the effectiveness of the entire organization. While some organizations may already be doing this, the findings from this research point to a marked dearth of understanding in the NGO community of how to increase evaluation use. This model provides a list of factors that influence use and practical steps that can be implemented to increase use.
Explanation of Model
Figure 5.1: The Utility Model
[Diagram: USE (conceptual, instrumental, process, strategic) at the core, surrounded by three categories of factors that increase use – Human Factors (intended users; interests/biases; professional capabilities), Evaluation Factors (evaluation procedures; substance of information; reporting), and Organizational Factors (organizational culture; routines and processes)]
The Core is USE
Understanding how evaluations are used is the focal point of this model. Evaluation findings serve three primary purposes: rendering judgments, facilitating improvements and generating knowledge. These need not be conflicting purposes; there can be overlap among them, and in some cases an evaluation can strive to achieve all three. What becomes important is understanding the purpose of the evaluation in order to determine intended uses.
Evaluations that seek judgment are summative in nature and ask questions that lead to instrumental use. Did the program work? Were the desired client outcomes achieved? Was implementation in compliance with funding mandates? Should the program be continued or ended? In such evaluations, the primary intended users are donors, program staff and decision-makers closely related to the program, who can use findings for direct course corrections. Improvement-oriented evaluations, on the other hand, are formative in nature. Instead of offering judgments, they seek to facilitate learning and make things better. The questions tend to be more open-ended and lead to process and strategic use. What are the program's strengths and weaknesses? What are the implementation challenges? What's happening that wasn't expected? How are stakeholders interacting? How is the program's external environment affecting internal operations? Where are efficiencies realized? Intended users of improvement-oriented evaluations tend to be donors, program managers, senior management and the Board.
Where evaluation findings contribute to increasing knowledge, they invoke conceptual use. This can be to clarify a model, prove a theory, generate patterns of success or explore policy options. Conceptual use "enlightens" users, often beyond the program team, including the Board, donors and the larger issue-area community. The knowledge generated is applied beyond the effectiveness of a particular program to policy formulation in general, in the form of sharing best practices. Studies of use also indicate that individuals and organizations can learn through the process of an evaluation, irrespective of the findings. The increasing prevalence and recognition of such process use (participants' increased ownership of evaluation findings, greater confidence in the evaluation process and in applying the results, and evidence of personal learning) combine to produce a net subsequent effect on program practice.
Expanding out, the model frames three categories that influence use:
(1) Human
(2) Evaluation
(3) Organizational
As seen from the literature and survey, several factors play a role in increasing use. The eight framed in this model were drawn from these sources and further developed to capture the key characteristics that influence use. The survey of practitioners validated the importance of these factors and in particular contributed to the development of the organizational culture factor. Irrespective of the size or complexity of the NGO, this research puts forth the notion that if these eight factors were triggered, the organization would observe a significant increase in evaluation use. The question then arises: what happens if any one of the factors is not present? The eight factors were identified and developed as each capturing a unique aspect of influencing use, so this research proposes that all of the factors must be present to maximize use. That said, it is logical to conclude that implementing the processes and procedures that enable these factors takes time and resources, and organizations have to balance this need against other competing priorities. The depth and breadth of engaging these factors depends entirely on the complexity of the NGO's programs, the magnitude of its operational and organizational structure, and the availability of resources (staff, time and funding). However, this model proposes that until all of the factors have been engaged, at their fullest level within the context, the NGO is not maximizing its evaluation utilization. Organizations can measure their progress by taking stock of the current state of these factors in the evaluation process and tracking their growth over time to see whether it corresponds with increased use. For example, at baseline, identifying intended users could be occurring in 60% of evaluations while an organizational culture of learning might be non-existent. Hypothetically, continuing to strengthen the involvement of users while building a learning organization could within a year increase these to 80% and 40% respectively, while all other factors are held constant. The organization would then be able to observe an increase in its evaluation utilization. Similarly, when all of the factors are engaged at the highest level, the organization will have reached its maximum utilization potential.
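The stocktaking described above can be sketched as a simple tracking exercise. The factor names come from the model itself; the scoring scale (share of evaluations in which a factor is engaged) and all the numbers are hypothetical, taken from the illustrative 60%/80%/40% example in the text:

```python
# Hypothetical stocktaking of the eight utility-model factors.
# Each score is the share of evaluations (0-100) in which the factor
# is engaged; the specific numbers below are illustrative only.

FACTORS = [
    "intended users", "interests/biases", "professional capabilities",
    "evaluation procedures", "substance of information", "reporting",
    "organizational culture", "routines and processes",
]

def utilization_snapshot(scores):
    """Average engagement across all eight factors (a rough proxy;
    the model holds that every factor must be engaged to maximize use)."""
    assert set(scores) == set(FACTORS), "score every factor"
    return sum(scores.values()) / len(scores)

# Baseline: most factors assumed at 60, learning culture non-existent.
baseline = dict.fromkeys(FACTORS, 60)
baseline["organizational culture"] = 0

# One year later: stronger user involvement, emerging learning culture,
# all other factors held constant (as in the text's example).
year_one = dict(baseline)
year_one["intended users"] = 80
year_one["organizational culture"] = 40

improved = utilization_snapshot(year_one) - utilization_snapshot(baseline)
print(f"change in average factor engagement: +{improved:.1f} points")
```

A plain average is only one possible summary; an organization might instead weight factors by the resources they require or track each factor separately, as the model's "fullest level within the context" framing suggests.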
HUMAN FACTORS That Increase Use
Intended Users
Involving intended users in the evaluation process is a key factor in increasing use. This was cited throughout the literature and validated in the practitioner survey. Given a range of potential stakeholders, use can be maximized when they represent all levels of the program decision-making hierarchy, as each group uses evaluation findings differently. Excluding program staff could affect instrumental use; excluding leadership/senior management could affect strategic and conceptual use. Another important user group identified was the donor. While most evaluations are conducted at the behest of the donor, actively involving donors in the evaluation process suggests an increase in the use of findings within the organization, for example to support ongoing fundraising efforts. Engaging donors also expands use outside the organization to influence the larger issue area. Depending on the desired level of use, intended users need to be identified, prioritized and included in evaluation planning.
The practitioner survey identified the following as the top three users of evaluation findings: program staff, senior management and donors. The graphic below illustrates examples of how different decision-making groups can use evaluation findings.
Figure 5.2 – Evaluation use and decision-making groups
Evaluations assist NGO leadership and senior management in deciding the future of programs: whether they are meeting objectives and advancing the mission of the organization, and whether the resources (staff and money) allocated to programs are proportional to their impact. Decisions about program and strategic expansion are also informed by evaluations. On a programmatic level, evaluations help staff track program progress and effectiveness. They can also highlight opportunities to realign resources and leverage opportunities to maximize results. For funders, the key use of evaluation is to determine continued support of a program. But on a larger level, evaluations can also inform and educate donors on best practices and effective interventions within their issue area of interest.
Interests and Biases
With the involvement of multiple users comes the challenge of balancing individual and group interests in promoting use. The politics of use must be recognized and managed during all phases of the evaluation. Survey respondents indicated that a lack of interest in the evaluation by users results in the findings not being used. Irrespective of the size of the program or the depth of the evaluation, user interests may be divergent: some focus on the efficient use of resources, others on the impact of the program and actual results, still others on the process of evaluation and organizational learning. Capturing and communicating intended uses while planning the evaluation can promote shared understanding and manage disappointment with the final deliverable. It is important to pay special attention to negative predispositions among intended users. Such reactions may stem from past critical evaluations, non-involvement or cursory participation, earlier findings not being applied, or time and cost constraints. Acknowledging these during initial user consultation and planning can lead to a realistic framework for use. Finally, no matter the program's size or complexity, if the potential for use of an evaluation among stakeholders does not exist, then the organization should seriously consider not conducting the evaluation.
Professional Capabilities
The extent of administrative and organizational skills present in users influences evaluation use. Some users may be organizers, others procrastinators, and some unable to get tasks finished. Additionally, the alignment of user capabilities with types of uses can yield different results. For example, conceptual use requires an ability to grasp and develop a new idea or method, while strategic use occurs when users are open to new ideas or change. Users' ability and inclination to receive and process information also affects use: for example, whether findings are shared electronically (impersonal) or in face-to-face or group meetings. While this aspect was not included in the practitioner survey, the literature strongly supports the idea that understanding user capabilities can lead to better utilization planning. Organizations can focus staff training on frequently used procedures that influence use.
EVALUATION FACTORS That Increase Use
Evaluation Procedures
Once intended uses are identified, the evaluation plan needs to reconcile them with the evaluation objectives. Active participation by intended users is essential, along with continual (multi-way) dissemination, communication and feedback of information during and after the evaluation. Involving users in key evaluation decisions, particularly in the formulation and interpretation phases, increases evaluation relevance, user ownership of results and the application of findings. Individuals and organizations are more disposed to change if they are familiar with the information and mentally prepared. The involvement of senior managers and decision-makers has traditionally come only at the final reporting stage; in some cases, sudden exposure to proposed changes that are complex and politically challenging increases the risk of rejection. The quality of research methods and the application of rigor also influence intended users, as is evident from the survey. Within this context, however, it is important during evaluation planning to be mindful of the difference between theoretical perceptions of rigor and those of the user. For example, a user might be more concerned with how beneficiaries were interviewed than with whether the answers were statistically analyzed.
Substance of Information
Survey respondents strongly agreed that use is promoted when information is relevant
and specific. This becomes important when there are multiple groups of intended users
for one evaluation. A single evaluation report may not promote use at all levels, so
matching findings to users emerged as important. Building consistency among users,
while at the same time sharing relevant and pertinent information, can present a
challenge to organizations. Linking those who possess information to those who need it
is what promotes organization-wide use. Combining findings from different programs
leads not only to new information but also to new uses. The timing of evaluations also
emerges as an influencing factor – use increases when the release of findings
coincides with key decision-making cycles. Recommendations made after the next project
cycle has begun may have very little instrumental use; however, there is always an
opportunity for conceptual use that links to overall learning within the organization.
Reporting
When it comes to reporting evaluation findings, the survey indicates that besides
targeted content, the style of presentation must also be targeted to users. Time and
again, excessive length and inaccessible language, particularly evaluation jargon, are
cited as reasons for non-use. Reports need to strike a balance between building
credibility in the process and delivering messages for action. Program-level users
might value detailed statistical data to inform instrumental use, while senior
management, the Board and donors may seek a balanced mix of quantitative and
qualitative information to guide conceptual use. A balanced mix of graphics (tables,
charts, figures), technical presentation and non-technical narrative enhances the use
potential of reports. If an organization has an active information exchange with peers
or issue networks, then evaluation findings need to be presented in a format that
supports this shared use. Successful organizations are able to strike a balance
between user needs and the uniform interpretation of findings.
ORGANIZATIONAL FACTORS That Increase Use
Organizational Culture
As is evident from the survey responses, policy (to promote evaluation use), systems
(to enable use), relevance (linking findings to the organization's mission),
accountability (making use part of staff work plans) and transparency (acceptance of
findings) highlight the close links between organizational context and evaluation use.
These links include processes that enable use, inclusive and participatory models of
decision-making, facilitated conceptual and strategic use (beyond programmatic use)
and an organization-wide commitment to use. Evaluation findings might jeopardize the
funding and future of the programs being evaluated, so the organization's tolerance
for failure and focus on learning will affect the extent of use. In an environment
where learning is encouraged and facilitated, the utilization of evaluations
flourishes. An open commitment to use within the organization can shift potential
users to become actual users.
Routines and Processes
In any organization, over time, program procedures and expectations become
institutionalized as routines, making them work habits. By reviewing and restructuring
these routines, NGOs can build an environment conducive to use. For example,
instituting processes to capture and retrieve organizational memory contributes to the
periodic reinforcement of findings and promotes cycles of use. Investing in systems
can facilitate the sharing and interpretation of information, which in turn increases
intended-user participation and results in higher utilization. Making stakeholder
analysis a routine step prior to any evaluation planning is another example of how
routines and processes can help reinforce the other factors that influence use.
Summarizing the model, the evidence unearthed in this study indicates the need to
consider external realities that play a significant role in influencing evaluation
use. For decades, organizations have focused on streamlining and refining the
evaluation process and methodology within a program context to increase use, and those
efforts have yielded only marginal success. Without actively incorporating the human
and organizational factors outlined above, NGOs will continue to struggle to maximize
evaluation use, and the gap between the resources expended in conducting evaluations
and the resulting value of such efforts will persist.
Steps to Implement the Model
This section explains how the above described utility model can be made operational.
It identifies the practical steps that organizations can take to trigger each of the eight
factors in the model. The table below presents an overview of these steps, mapping
them to the factors. The columns represent the eight factors grouped within the three
categories: human, evaluation and organizational. The rows list the practical steps.
An X marks each factor that is triggered by a particular action step. Following the
table, there is a description of each step.
While analyzing the steps and mapping them to the factors, it became clear that they
fall into two groups: action steps that happen at the program level and those that
happen at the organizational level. For example, conducting stakeholder analysis is
done by staff who work closely with the individual program, as this action is unique
to each program. On the other hand, investing in technology and tools is an action
that benefits all programs and is implemented at the organizational level. Grouping
the action steps by program and organizational level brings to the forefront the types
of staff, the level of engagement and the depth of resources involved in implementing
them. Some actions can be implemented immediately and do not incur significant
additional costs (for example, defining ongoing user engagement in the evaluation),
while others have to be factored into the organization's long-term operations and
budgetary planning (for example, staff training). This grouping can therefore help an
NGO prioritize and customize the implementation of the action steps according to its
needs and resources. In mapping which action triggers which factor, the table
identifies a particular pattern: it allows decision-makers to better capture (and plan
for) the action steps that lead from one to another, cascading forth to increase the
utilization of evaluations.
In other words, a particular action step may trigger more than one of the factors in
Table 5.1, with one step stimulating another. To take a relatively simple, linear
example:
1. mapping intended uses to intended users captures a user’s interests and biases
about a program, which might
2. result in presenting reports with specificity that make her attitude toward the
program more positive, which might in turn
3. facilitate reuse and lead her to take on the interpersonal role of a change agent
within her organization, which might
4. link to organizational learning and result eventually in reconsideration of
organizational policy
There could be several alternate interpretations of how these actions trigger the
factors, resulting in different patterns. It is therefore important to note that the
table shows a set of relationships that is neither finite nor linear. The larger
interest is to identify steps that trigger the eight factors that influence use; the
action steps in this study provide one approach and are by no means the only method to
achieve the desired objectives.
Table 5.1 – Mapping practical steps to the factors that influence evaluation use

Factors that influence evaluation use (table columns), grouped by category:
HUMAN – Intended Users; Interests and Biases; Professional Capabilities
EVALUATION – Evaluation Procedures; Substance of Information; Reporting
ORGANIZATIONAL – Organizational Culture; Routines and Processes

Practical steps to trigger the factors (table rows), at the program level:
• Conduct stakeholder analysis to identify users (triggers 5 factors)
• Map intended uses to intended users (triggers 3 factors)
• Get user buy-in on use and methods (triggers 6 factors)
• Time evaluations to decision-cycles (triggers 4 factors)
• Define ongoing user engagement (triggers 5 factors)
• Present reports with specificity (triggers 4 factors)
• Distribute to secondary users (triggers 3 factors)
• Facilitate reuse (triggers 6 factors)
• Link to organization learning (triggers all 8 factors)

Practical steps at the organizational level:
• Explicit commitment to use (triggers 4 factors)
• Staff training (triggers 5 factors)
• Invest in technology and tools (triggers 5 factors)
• Allocate program-level resources (triggers 4 factors)
• Provide incentives to use (triggers 7 factors)
Practical Steps at the Program Level
At the program level, evaluation activity can be split into two phases: (a)
planning and execution and (b) follow-up. The steps below are grouped into
these two phases to assist organizations in deciding when to engage in these
activities.
Figure 5.3 – Practical Steps at the Planning and Execution Phase
• Conduct stakeholder analysis to identify users
This is a fundamental step toward utilization. The process of identifying users
involves taking into account the varied and multiple interests, information
needs, abilities to process the evaluation findings and political sensitivities
within the organization. By involving multiple users, organizations can dampen
the effect of staff turnover, so that the departure of some users will not
affect utilization. In the event of a large-scale turnover of intended users,
however, the process of identifying a new group of users needs to be revisited.
Although this might delay the evaluation process, it will pay off in eventual
use. Starting by identifying users makes it possible to provide specific and
relevant information at the reporting stage.
• Map intended uses to intended users
Depending on the type of use that is desired, the corresponding user group must
be involved. For example, deriving instrumental use makes the involvement of
key program decision-makers essential, while conceptual use may require senior
management participation. Focusing on intended uses also helps balance the
reality of resource constraints, as it is impossible for any evaluation to
address all the needs of each user. In this context it becomes imperative to
make deliberate and informed choices about how each user will use the findings
and to prioritize the users and their uses. Mapping these during the planning
stage of an evaluation allows for negotiations leading to commitment and buy-in
from stakeholders ahead of time.
• Get user buy-in on use and methods
Intended users' interest can be nurtured and enhanced by actively involving
them in making significant decisions about the evaluation. Use can only occur
when the findings are credible, so understanding and meeting user expectations
on quality and rigor is essential. Involvement increases relevance,
understanding and ownership, all of which facilitate informed and appropriate
use. Actively engaging users in the planning and implementation of the
evaluation also gives them opportunities to be reflective, to share and build
interpretations (conceptual use) and finally to put findings into action
(instrumental use). Within this context, however, the focus must remain on
quality rather than quantity: involving multiple users and identifying multiple
uses does not necessarily result in higher utilization. Conversations on use
and evaluation methods can also help identify the training users might need to
participate actively in evaluations (for example, statistical analysis to
interpret quantitative data). At the organizational level, acknowledging user
bias and engaging in open conversation about conflicting interests builds a
healthy practice of collective learning.
• Time evaluations to decision-cycles
In projects with multiple donors, decision-making milestones may vary, and
conducting multiple evaluations to correspond with each milestone might not be
practical. Focusing on the objective – to maximize use – allows organizations
to structure evaluations at intervals that are meaningful to multiple users and
tied to critical decision-making cycles. Intended uses can also guide when
evaluations are conducted: mid-term evaluations might be necessary for programs
that allow for course corrections, whereas an evaluation at the end of the
program might feed directly into subsequent planning.
• Define ongoing user engagement
Engagement of users must be factored into the entire cycle of the evaluation.
While users have an active role during planning, it might also be critical to
keep some of them regularly apprised of progress. In complex evaluations,
maintaining such engagement can give users early visibility into emerging
lessons that could significantly shift the future of the program. Once again,
evaluation planners must balance this need against ensuring that the evaluation
does not get bogged down in conflicts among multiple user needs. The
prioritization of intended users and uses during planning can help guide this
engagement; it also helps focus on users who have specialized skills or
capabilities for certain aspects of the evaluation. It is also critical to
ensure that the engagement outlined during the planning phase is adhered to
during execution. Specifically called out here is the step of involving users
in the interpretation of findings: just as getting user buy-in on the research
methods during planning was important, how data is interpreted and presented
also benefits from user engagement.
• Present reports with specificity
The process of sharing and targeted dissemination of findings plays a key role
in ensuring that intended uses are facilitated. Whether through information
technology or in a meeting, engaging users immediately following the evaluation
allows for discussion and decisions on use. Allowing for this conversation and
debrief creates a learning loop for the users who were involved in the
planning, execution and follow-up of the evaluation. While writing multiple
reports is not reasonable, presenting findings in a way that allows different
groups of users to absorb the recommendations and take action is invaluable.
This can be captured in the Terms of Reference of the evaluation to ensure that
the variety of reporting needs is clearly identified ahead of time.
The follow-up phase begins when the evaluation is completed and the reports are
shared with primary users. The steps below explain what needs to happen
subsequently to expand the reach of the evaluation findings and keep the
learning current to facilitate reuse.
Figure 5.4 – Practical Steps at the Follow-up Phase
• Distribute to secondary users
Once an evaluation is completed and the findings are shared with intended
users, there remains a window of opportunity to expand the learning to a new
set of users: those not directly connected with the program but who can benefit
from the recommendations. These users can also be external to the organization,
such as partners in the issue area, academics and, through targeted messaging,
the general public. This action can assist NGOs in converting program learning
into a marketing and fundraising tool. In the current global economic crisis,
with NGOs facing unprecedented financial challenges, it is imperative that they
use evaluations to allocate resources effectively and maximize impact. NGOs can
also find ways to share evaluation findings within their sector to leverage
opportunities. Given that the funding environment is highly competitive,
complete transparency by NGOs may not always be rewarded by donors. However,
NGOs can share evaluation findings with peer networks that can collectively
leverage resources for the issue or strengthen the movement for their cause.
Publishing reports on websites, presenting findings through workshops and
conferences, and targeted media communications can all reach a wider set of
users.
• Facilitate reuse
Although evaluations provide a snapshot in time, the findings and learning can
continue to inform program managers. Putting in place processes and routines
that encourage the review of past evaluations and the reuse of findings where
applicable extends the return on investment of an evaluation. One step is to
require a review of the most recent evaluation when planning any changes to the
program cycle; this creates a formal process for staff to reconnect with the
findings.
• Link to organization learning
Organizations are seen as learning by encoding inferences from history into
routines that guide behavior. Evaluation is a key mechanism that allows an
organization to assess these routines and provide feedback for improvement. By
extracting key learning from the program level and linking it to the
higher-level objectives of the organization, an NGO can track how its numerous
programs are contributing to accomplishing the mission. Involving users who do
not work directly with the program allows the findings to be expanded beyond a
narrow scope. In addition, creating a network of users across departments or
functions enables the cross-pollination of findings and creates linkages
throughout the organization.
Practical Steps at the Organization Level
• Commitment to use
A commitment from leadership provides an accountability framework that
increases trust and builds on shared values of learning. Emphasizing the value
of learning, regardless of what the evaluation results show, helps staff become
astute information users rather than holding on to prior positions. Users
develop a long-term view of learning, improvement and knowledge use, whereby
short-term negative results are less threatening when placed in the longer-term
context of ongoing development. Processes that engage intended users can help
manage internal conflicts around resources through conversations on how
evaluation results ultimately benefit the organization's beneficiaries.
• Staff training
Training stakeholders and potential users in evaluation methods and utilization
processes addresses both short-term and long-term uses. Making decision-makers
more sophisticated about evaluation can contribute to greater use over time.
Different intended users will bring varying perspectives to the evaluation,
which will affect their interpretation; users need skills that help them
differentiate between analysis, interpretation, judgment and recommendations.
By placing emphasis on organizational learning, action research, participatory
evaluation and collaborative approaches, the evaluation process can defuse fear
of, and resistance to, negative findings. Training can also be directed toward
improving the quality and rigor of evaluations.
• Invest in technology and tools
An almost universal weakness identified in NGOs is their limited capacity to
learn, adapt and continuously improve the quality of what they do. There is an
acute need for systems that ensure they know and learn from what they are
achieving and then apply what they learn (deutero-learning). Technology such as
groupware tools, intranets, e-mail and bulletin boards can facilitate the
processes of information gathering (e.g., identifying users), distribution
(e.g., sharing findings) and interpretation (e.g., linking findings to intended
uses). Information systems (IS) also strengthen the elements of organizational
memory so that evaluation findings can be shared and used over time. However,
technology must not be seen as a one-stop solution to utilization. There is
often a strong tendency to design IS solutions around supply-side criteria –
the information available – rather than a clear understanding of the way
information is actually used. IS can be a highly effective tool that increases
the efficiency of resources (money, staff time) and guides the effective
interpretation of information, but it must not be viewed as a substitute for
conventional information-sharing approaches. Technology fixes can often mask
the more complex and structural organizational barriers to evaluation use, such
as political conflicts, ineffective decision-making models and limited staff
competencies and skills. It is not enough to have trustworthy and accurate
information; staff need to know how to use information to weigh evidence,
consider contradictions and inconsistencies, articulate values and examine
assumptions.
• Allocate program level resources
The lack of adequate resources emerged as the key impediment to promoting use,
including resources for systems and technology, staff skills training and
post-evaluation follow-up. Some NGOs have taken the issue of resources beyond
the organization, educating and engaging donors on the opportunity and the need
to support strengthening the evaluation-use infrastructure. One action item can
be to encourage donors to add an evaluation utilization component to program
delivery costs. NGOs should see learning as an essential component of their
operations and must take the necessary steps to allocate a percentage of their
general operating budget to systems that support evaluation use. NGOs can
increase use by making evaluation follow-up an integral part of their
operations and investing resources to build the systems and processes to
support it. Dedicated follow-up individuals, the development of evaluation
skills, clear allocation of responsibility and specific mechanisms for action
all increase the likelihood of evaluation use, particularly if follow-up is
planned from the beginning of the evaluation.
• Provide incentives
Incentives can encourage use. Tying evaluation use and learning to individual
performance measurement encourages staff to participate actively in the
process. Recognizing that only individuals can act as agents of learning,
organizations must create roles, functions and procedures that enable staff to
systematically collect, analyze, store, disseminate and use information
relevant to their performance. Finally, it is important to cultivate evaluation
as a leadership function of all managers and program directors in the
organization. The person responsible for the evaluation then plays a
facilitative, resource and training role in support of managers, rather than
spending time actually conducting the evaluation. In this framework, evaluation
becomes a leadership responsibility focused on decision-oriented use rather
than a data-collection task focused on routine internal reporting. Empowering
managers to identify users and uses not only nurtures accountability but also
makes the evaluation process thoughtful, meaningful and credible.
This chapter presented an evaluation utility model that identified eight factors
that influence use, followed by a list of action steps that organizations can
take to trigger these factors and operationalize the model at both the program
and organizational levels. While the focus was to develop a model that enhances
use, the final product also succeeds in limiting the barriers to use identified
in the literature review chapter. This model adds to the knowledge of evaluation
use in NGOs by expanding its focus from the program level alone to include the
external realities at the organizational level.
Chapter 6: Conclusion
This study began with the purpose of understanding the fundamentals of
evaluation use. How do we know there is use? What helps and hinders use?
Within the program evaluation context of the NGO sector, how do these factors
manifest themselves? What can be done to improve utilization?
Based on independent research and the review of literature, an evaluation utility
model was developed. This model presents a fundamental shift in how NGOs
must approach program evaluation. In order to maximize use, it is no longer
sufficient to focus on program-level processes. Evaluation use is a
multi-dimensional phenomenon that is interdependent with organizational
context, systems and evaluation practice. Within this context, the utilization
process is not static and linear but dynamic, open and multi-dimensional,
driven by relevance, quality and rigor. The model outlined attempts to capture
this environment, focused on the central premise that whether an evaluation is
formative or summative, internal or external, scientific, qualitative or
participatory, the primary reason for conducting evaluations is to increase the
rationality of decision-making.
Embedding the principles of use throughout the lifecycle of an evaluation
enhances utilization. The responsibility of evaluation lies in identifying the
strengths and weaknesses of programs, which it can do extremely well, and in
facilitating utilization, which it has done less well. Serious participation
and a far greater focus on intended users and uses would help expose the
practice of inappropriate or ritual evaluation and prevent evaluation from
further contributing to the current mistrust and saturation in the sector. This
is equivalent to mapping how the constructs from organizational learning –
acquisition, distribution, interpretation and memory – are applied within the
lifecycle of an evaluation with utilization as the focus. The utility model
revealed that the influencing factors extend to the larger context of
organizational behavior and learning. The findings from evaluations must be
transferred from a written report to the agenda of managers and
decision-makers. The challenge for the nonprofit sector is to make evaluation
utilization an essential function of its operations, similar to accounting
practices. While in the past decade there has been a paradigm shift in NGOs
toward dedicating resources and building their evaluation practice, they now
need to complete this transformation and link findings to learning at the
organizational level.
At present, there might also be a sense of inertia within the sector, generated
by an overload of information, systems and policies. Evaluation itself may be
inadvertently contributing to the workload. Given this, the decision to carry out
an evaluation should itself be considered as part of an information prioritization
process by all stakeholders. Extending the utilization principle to even before an
evaluation is commissioned may allow for staff to absorb existing information
and identify how new information will increase overall effectiveness.
Far from being a discrete operation, evaluation use must be seen as being at the
heart of the organizational learning process. Developing the "virtuous circle"
linking evaluation use to learning to effectiveness requires an explicit
commitment at all levels of the organization. What is evident is that
evaluation utilization is no longer an option: it is essential if the NGO
community is to deliver on the ambitious targets it has set itself. This
research concludes that the utility model presented moves the dialogue on
utilization further than it has been and positions organizations, wherever they
are in the continuum of use, to maximize their results.
Recommendations for Future Research
Blending theory and practitioner feedback, this research provides a model that
can increase evaluation use. Even so, each step, taken in isolation, might face
implementation challenges within any specific organization. Whether applied in
large international NGOs or small community-level ones, the model outlined in
this research is less about universal application and more about what can be
done, however small, to increase utilization within the existing context. The
diversity among agencies includes their background, institutional context,
specific priorities, target audiences and objectives. Future researchers could
test this model through in-depth case studies among diverse NGOs. If the eight
essential factors in the model were triggered, would there be increased
utilization? How would the factors apply in a small, community-based NGO versus
a large, international NGO? How critical is the organizational learning
environment to effective use?
This research has presented information that supports the premise of evaluation
utilization while acknowledging the complexity of the factors that influence
use and the systems that enhance it. In the end, however, this approach to
evaluation use must also be judged by its usefulness. Experimental research on
whether the practical steps outlined here can be collectively implemented, and
whether they result in increased use, can only add clarity and deeper
understanding of evaluation utilization in NGOs. This model was developed
through an in-depth review of the literature and a survey within the NGO
sector. It remains to be seen whether the model holds firm when tested in
organizations that did not participate in this research or that operate in
contexts different from those of the survey respondents. Also, while the
context of the research was NGOs, is this model a reflection of evaluation use
in any sector? Can it be extrapolated, wholly, to other types of institutions?
For example, how would the model work within the academic sector? Do some
factors become more important in those settings?
Another opportunity for further research, given the current economic crisis and
the dire straits under which NGOs are operating, is to understand how NGOs can
leverage existing resources and partnerships to advance evaluation use. This
research has indicated that pooling resources for evaluation could lead to
increased use and promote shared learning. Research on how such networks can be
created, facilitated, maintained and leveraged for the purpose of sharing
evaluation resources and findings would be beneficial for the sector.
REFERENCE LIST
Alkin, M. C. A Guide for Evaluation Decision Makers. Newbury Park, CA: Sage, 1985.
Alkin, M. C., J. Kosecoff, C. Fitzgibbon, and R. Seligman. "Evaluation and Decision Making: The Title VII Experience." In CSE Monograph No. 4. Los Angeles: UCLA Center for the Study of Evaluation, 1974.
Alkin, Marvin C. Debates on Evaluation. Newbury Park, California: Sage Publications, 1990.
Alkin, Marvin, Richard Daillak, and Peter White. Using Evaluations: Does Evaluation Make a Difference? Beverly Hills: Sage Publications, 1979.
Alkin, Marvin, and Karin Coyle. "Thoughts on Evaluation Utilization, Misutilization and Non-Utilization." Studies in Educational Evaluation 14, no. 3 (1988): 331-40.
ALNAP. "Humanitarian Action: Learning from Evaluation." London: Overseas Development Institute, 2001.
"The American Journal of Evaluation." aje.sagepub.com.
Anderson, Scarvia B., and Samuel Ball. The Profession and Practice of Program Evaluation. San Francisco: Jossey-Bass, 1978.
Argote, Linda. Organizational Learning: Creating, Retaining and Transferring Knowledge. New York: Springer-Verlag, 1999.
Argyris, Chris, Robert Putnam, and Diane McLain Smith. Action Science: Concepts, Methods and Skills for Research and Intervention. San Francisco: Jossey-Bass, 1985.
Argyris, Chris, and Donald Schön. Organizational Learning II: Theory, Method and Practice. Reading, MA: Addison-Wesley, 1996.
———. Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley, 1978.
Ayers, Toby Diane. "Stakeholders as Partners in Evaluation: A Stakeholder-Collaborative Approach." Evaluation and Program Planning no. 10 (1987): 9.
181
Berthoin-Antal, Ariane, Meinolf Dierkes, John Child, and Ikujiro Nonaka. Handbook of Organizational Learning and Knowledge: Oxford University Press, 2001.
Brabent, Koenraad Van. "Organizational and Institutional Learning in the Humanitarian Sector: Opening the Dialogue." London: Overseas Development Institute, 1997.
Brett, Belle, Lynnae Hill-Mead, and Stephanie Wu. "Perspectives on Evaluation Use and Demand by Users: The Case of City Year." New Directions for Program Evaluation no. 88 (2000).
Britton, Bruce. "The Learning Ngo." INTRAC Occasional Paper Series no. 17 (1998).
Campbell, Donald T., and Julian C. Stanley. Experimental and Quasi-Experimetal Designs for Research. Chicago: Rand McNally, 1963.
Caracelli, Vaerie J., and Hallie Preskill. "The Expanding Scope of Evaluation Use." New Directions for Evaluation no. 88 (2000).
Carlsson, Kerker, Gunnar Kohlin, and Anders Ekbom. The Political Economy of Evaluation: International Aid Agencies and the Effectiveness of Aid. New York: St. Martin's Press, 1994.
Carson, Emmet D. "Foundations and Outcome Evaluation." Nonprofit and Voluntary Sector Quarterly 29, no. 3 (2000): 479-81.
Community, Alliance for a Global. "The Ngo Explosion." Communications 1, no. 7 (1997).
Cousins, J. Bradley, and Kenneth A. Leithwood. "Current Empirical Research on Evaluation Utilization." Review of Educational Research 56, no. 3 (1986): 331-64.
Cronbach, L. J. Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass, 1982.
Daft, R. L., and R. H. Lengel, eds. Information Richness: A New Approach to Managerial Behavior and Organizational Design. Edited by L. L. Cummings and B. M. Straw, Research in Organizational Behavior. Homewood, IL: JAI Press, 1984.
182
Daft, R. L., and K. E. Weick. "Toward a Model of Organizations as Interpretation Systems." The Academy of Management Review 9, no. 2 (1984): 284-95.
Davis, H. R., and S. E. Salasin, eds. The Utilization of Evaluation. Edited by E. L. Struening and M. Guttentag. Vol. 1, Handbook of Evaluation Research. Beverly Hills: Sage Publications, 1975.
Desai, Vandana, and Robert Potter. The Companion to Development Studies. London: Arnold, 2002.
Development, Organization for Economic co-operation and. "Evaluation Feedback for Effective Learning and Accountability." In Evaluation and Effectiveness, edited by Development Assistance Committee. Paris: OECD.
Dewey, J. How We Think: A Restatement of the Relation of Reflective Thinking to Educative Process. Lexington, MA: D.C. Heath, 1960.
Dibella, Anthony. "The Research Manager's Role in Encouraging Evaluation Use." Evaluation Practice 11, no. 2 (1990).
Earl, Sarah, Fred Carden, and Terry Smutylo. Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa: The International Development Research Center, 2001.
Edwards, Michael, and David Hulme. Beyond the Magic Bullet: Ngo Performance and Accountability in the Post-Cold War World. Connecticut: Kumarian Press, 1996.
Engel, P., C. Carlsson, and A. van Zee. "Making Evaluation Results Count: Internalizing Evidence by Learning." In ECDPM Policy Management Brief No. 16. Maastricht: European Centre for Development Policy Management, 2003.
"The Evaluation Center." www.wmich/edu/evalctr/.
Fiol, C. M., and M. A. Lyles. "Organizational Learning." The Academy of Management Review 10, no. 4 (1985): 803-13.
Fisher, Julie. Nongovernments: Ngos and Political Development of the Third World. Connecticut: Kumarian Press, 1998.
183
Fowler, A. Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development. London: Earthscan, 1997.
Fox, Jonathan, and David Brown. The Struggle for Accountability. Cambridge, MA: MIT Press, 1998.
"The Grameen Bank." http://www.grameen-info.org/bank/GBdifferent.htm
Greene, J. C. "Technical Quality Vs. User Responsiveness in Evaluation Practice." Evaluation and Program Planning 13, (1990): 267-74.
Greene, Jennifer C. "Stakeholder Participation and Utilization in Program Evaluation." Evaluation Review 12, no. 2 (1988): 91-116.
Hall, Michael H., Susan D. Phillips, Claudia Meillat, and Donna Pickering. "Assessing Performance: Evaluation Practices and Perspectives in Canada’s Voluntary Sector." edited by Norah McClintock. Toronto: Canadian Centre for Philanthropy, 2003.
Hatry, Harry P., and Linda M. Lampkin. "An Agenda for Action: Outcome Management for Nonprofit Organizations." Washington DC, The Urban Institute, 2001.
Henry, Gary, and Melvin Mark. "Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions." American Journal of Evaluation 24, no. 3 (2003): 293-314.
Howes, M. "Linking Paradigms and Practise, Key Issues in the Appraisal, Monitoring and Evaluation of British Ngo Projects." Journal of International Development 4, no. 4 (1992).
Huber, George P. "Organizational Learning: The Contributing Processes and the Literatures." Organization Science 2, no. 1 (1991): 88-115.
Hudson, Bryant, and Wolfgang Bielefeld. "Structures of Multinational Nonprofit Organizations." Nonprofit Management and Leadership 9, no. 1 (1997).
"Internal Revenue Service - Charities and Non-Profits (Extract Date October 4, 2005)." http://www.irs.gov/charities/article/0,,id=96136,00.html.
Johnson, Burke R. "Toward a Theoretical Model of Evaluation Utilization." Evaluation and Program Planning 21, (1998): 93-110.
184
Johnson, D.W., and F.P. Johnson. Joining Together: Group Theory and Group Skills. Boston: Allyn and Bacon, 2000.
Kahnerman, D., P. Slovic, and A. Tversky. Judget under Uncertainty: Heuristics and Biases. New York: Cambridge University Press, 1982.
King, J.A. "Research on Evaluation and Its Implications for Evaluation Research and Practice." Studies in Educational Evaluation 14, (1998): 285-99.
Kirkhart, K. E. "Reconceptualizing Evaluation Use: An Integrated Theory of Influence." New Directions for Evaluation no. 88 (2000).
Krone, K. J., F. M. Jablin, and L. L. Putnam, eds. Communication Theory and Organizational Communication: Multiple Perspectives. Edited by F. M. Jablin, L. L. Putnam, K. H. Roberts and L. W. Porter, Handbook of Organizational Communication. Newbury Park, CA: Sage, 1987.
Letts, Christine. High Performance Nonprofit Organizations: Managing Upstream for Greater Impact. New York: Wiley, 1999.
Levin, B. "The Uses of Research: A Case Stuffy in Research and Policy." The Canadian Journal of Program Evaluation 2, no. 1 (1987): 44-55.
Levitt, B., and J. G. March. "Organizational Learning." Annual Review of Sociology no. 14 (1988): 319-40.
Levitt, B. S., and J. G. March, eds. Organizational Learning. Edited by M. D. Cohen and L. S. Sproull, Organizational Learning. Thousand Oaks, CA: Sage, 1996.
Light, Paul C. Making Nonprofits Work: A Report on the Tides of Nonprofit Management Reform. Washington, DC: The Aspen Institute Brooking Institution Press, 2000.
Lincoln, Y., and E. Guba. Naturalistic Inquiry. Thousand Oaks, CA: Sage Publications, 1985.
Lindenberg, Marc, and Bryant Coralie. Going Global: Transforming Relief and Development Ngos: Kumarian Press, 2001.
Ludin, Jawed, and Jacqueline Williams. Learning from Work: An Opportunity Missed or Taken? London: BOND, 2003.
185
March, J. G., and J. P. Olsen. Ambiguity and Choice in Organizations. Bergen: Universitetsforlaget, 1976.
Mark, Melvin, and Gary Henry. "The Mechanisms and Outcomes of Evaluation Influence." Evaluation 10, no. 1 (2004): 35-57.
Martin, J. Cultures in Organizations: Three Perspectives. New York: Oxford University Press, 1992.
Mathison, S. "Rethinking the Evaluator Role: Partnerships between Organizations and Evaluators." Evaluation and Program Planning 17, no. 3 (1994): 299-304.
McNamara, Carter. Field Guide to Nonprofit Program Design, Marketing and Evaluation. Minneapolis: Authenticity Consulting, 2003.
Mott, Andrew. "Evaluation: The Good News for Funders." Washington, DC: Neighborhood Funders Group, 2003.
Mowbray, C.T. "The Role of Evaluation in Restructuring of the Public Mental Health System." Evaluation and Program Planning 15, (1992): 403-15.
Murray, Vic. "The State of Evaluation Tools and Systems for Nonprofit Organiations." New Directions for Philanthropic Fundraising no. 31 (2001): 39-49.
Neuendorf, Kimberly A. "The Content Analysis Guidebook Online." <http://academic.csuohio.edu/kneuendorf/content/>. (2007)
Nevis, E. C., A. J. DiBella, and J. M. Gould. "Understanding Organizations as Learning Systems." Sloam Management Review 36, no. 2 (1995): 75-85.
"The Nonprofit Sector in Brief - Facts and Figures from the Nonprofit Almanac 2007." (2006), http://www.urban.org/UploadedPDF/311373_nonprofit_sector.pdf.
Ouchi, W. G. "Markets, Bureaucracies, and Clans." Administrative Science Quarterly no. 25 (1980): 129-41.
Owen, J.M., and F.C. Lambert. "Roles for Evaluation in Learning Organizations." Evaluation 1, no. 2 (1995): 237-50.
186
Patton, M.Q. "Development Evaluation." Evaluation Practice 15, no. 3 (1994): 311-19.
———. Utilization-Focused Evaluation. 2nd edition ed. Beverly Hills, CA: Sage, 1986.
Patton, Michael Quinn. Utilization Focused Evaluations. Beverly Hills, CA: Sage Publications, 1997.
Plantz, Margaret C., Martha Taylor Greenway, and Michael Hendricks. "Outcome Measurement: Showing Results in the Nonprofit Sector." New Directions for Program Evaluation no. 75 (1997): 15-30.
Popper, M., and R. Liptshitz. "Organizational Learning Mechanisms: A Cultural and Structural Approach to Organizational Learning." Journal of Applied Behavioral Science 34, (1998): 161-78.
Powell, Mike. Information Management for Development Organisations. 2nd ed, Oxfam Development Guidelines Series. Oxford: Oxfam, 2003.
Preskill, H. "Evaluation's Role in Enhancing Organizational Learning." Evaluation and Program Planning 17, no. 3 (1994): 291-97.
Putte, Bert Van de. "Follow-up to Evaluations of Humanitarian Programmes." London: ALNAP, 2001.
"Research and Policy in Development (Rapid)." Overseas Development Institute, http://www.odi.org.uk/RAPID/.
Riddell, R. C., S. E. Kruse, T. Kyollen, S. Ojanpera, and J. L. Vielajus. "Searching for Impact and Methods: Ngo Evaluation Synthesis Study." OECD/DAC Expert Group, 1997.
Riddell, R.C. Foreign Aid Reconsidered. Baltimore: Johns Hopkins Press, 1987.
Rosenbaum, Nancy. "An Evaluation Myth: Evaluation Is Too Expensive." National Foundation for Teaching Entrepreneurship (NFTE), http://www.supportctr.org/images/evaluation_myth.pdf.
Rutman, Leonard. Evaluation Research Methods: A Basic Guide. 2d.ed. Beverly Hills, CA: Sage Publications, 1984.
187
Scriven, M. S., ed. Evaluation Ideologies. Edited by G. F. Madaus, M. Scriven and D. L. Stufflebeam, Evaluation Models: Viewpoints on Educational and Human Service Evaluation. Boston: Kluwer-Nijhoff, 1983.
Senge, P. M., Charlotte Roberts, Rick Ross, George Roth, Bryan Smith, and Art Kleiner. The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations. New York: Currency/Doubleday, 1999.
Senge, Peter. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday, 1990.
Shadish, W.R., T.D. Cook, and L.C. Leviton. Foundations of Program Evaluation: Theories of Practice. Newbury Park, CA: Sage Publicaitons, Inc., 1991.
Shrivastava, P. "A Typology of Organizational Learning Systems." Journal of Management Studies 20, no. 1 (1983): 7-28.
Shulha, Lyn M., and J. Bradley Cousins. "Evaluation Use: Theory, Research and Practice since 1986." American Journal of Evaluation 18, no. 1 (1997): 195-208.
SIDA. "Are Evaluations Useful? Cases from Swedish Development Co-Operation.": Swedish International Development Agency, 1999.
Smillie, Ian, and John Hailey. Managing for Change. London: Earthscan, 2001.
Stevens, C. L., and M. Dial, eds. What Constitutes Misuse? Edited by C. L. Stevens and M. Dial, New Directions for Program Evaluation: Guiding Principles for Evaluators. San Francisco: Jossey-Bass, 1994.
Torres, R. T., and H. Preskill. "Evaluation and Organizational Learning: Past, Present and Future." American Journal of Evaluation 22, no. 3 (2001): 387-95.
Torres, R.T., H. Preskill, and M.E. Piontek. Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations. Thousand Oaks, CA: Sage, 1996.
Torres, Rosalie T. "What Is a Learning Approach to Evaluation?" The Evaluation Exchange VIII, no. 2 (2002).
188
UNDP, United Nations Development Program. "Human Development Report." New York: Oxford Press, 1993.
Watkins, K., V. Marsick, and J. Johnson, eds. Making Learning Count! Diagnosing the Learning Culture in Organizations. Newbury Park, CA: Sage, 2003.
Weiss, C. H., ed. Ideology, Interest, and Information: The Basis of Policy Decisions. Edited by D. Callahan and B. Jennings, Ethics, the Social Sciences, and Policy Analysis. New York: Plenum, 1993.
Weiss, Carol. Evaluation. 2nd ed. Saddle River, NJ: Prentice Hall, 1997.
———. "Have We Learned Anything New About the Use of Evaluation?" American Journal of Evaluation 19, no. 1 (1998): 21-33.
———. Social Science Research and Decision-Making. New York: Columbia University Press, 1980.
Weiss, Carol H. Evaluation Research: Methods for Assessing Program Effectiveness. New Jersey: Prentice-Hall, 1972.
———, ed. Utilization of Evaluation: Toward Comparative Study. Edited by Carol H. Weiss, Evaluating Action Programs: Readings in Social Action and Education. Boston: Allyn and Bacon, 1972.
Wholey, J. S., H. P. Hatry, and K. E. Newcomer. Handbook of Practical Program Evaluation. 2nd ed. San Francisco, CA: Jossey-Bass, 2004.
Wigley, Barb. "The State of Unhcr's Organization Culture: What Now?" http://www.unhcr.org/publ/RESEARCH/43eb6a862.pdf
Williams, Kevin, Bastiaan de Laat, and Elliot Stern. "The Use of Evaluation in the European Commission Services - Final Report." Paris: Technopolis France, 2002.
189
Appendix B – Master List of US-Based NGOs with an International Focus
Compiled from the IRS Exempt Database registry. Date of extract: January 4, 2006
# Organization Name # Organization Name
1 A Jewish Voice for Peace 247 International Center
2 Academy for Educational Development 248
International Center for Research on Women
3 Action Against Hunger 249 International Center in New York
4 Action Against Hunger (USA)
250International Crisis Group, Washington Office
5 ActionAid International USA 251 International Development Association
6 Adventist Community Services 252 International Diplomacy Council
7 Adventist Development and Relief Agency International 253
International Federation of Ophthalmological Societies
8 Advocacy Institute 254 International Forum on Globalization
9 Afghan Community in America
255International Healthcare Safety Professional Certification Board
10 Africa Action
256International Institute for Energy Conservation
11 Africa Faith and Justice Network
257International Institute of Rural Reconstruction, U.S. Chapter
12 Africa News Service 258 International Medical Corps
13 Africa-America Institute
259International Orthodox Christian Charities
14 Africa-American Institute - New York 260
International Pen Friends
15 AFRICALINK 261 International Relief and Development
16 African Community Refugee Center 262
International Relief Friendship Foundation
198
17 African Development Foundation 263 International Relief Teams
18 African Development Institute 264 International Rescue Committee
19 African Medical & Research Foundation, Inc. 265
International Rescue Committee - USA
20 African Medical and Research Foundation 266
International Rescue Committee-San Diego
21 Africare 267 International Rescue Committee-Seattle
22 Aga Khan Foundation U.S.A.
268International Research & Exchanges Board
23 Agri-Energy Roundtable
269International Social Service, United States of America Branch
24 Aid for International Medicine
270International Third World Legal Studies Association
25 Aid to Artisans
271International Visitors Council of Philadelphia
26 Air Serv International 272 Interplast
27 Alliance for Communities in Action
273Interreligious and International Federation for World Peace
28 Alliance for Southern African Progress 274
InterServe/U.S.A.
29 Alliance of Small Island States 275 Intervida Foundation USA
30 American Association for International Aging 276
Irish American Partnership
31 American Association for the International Commission of Jurists 277
Irish American Unity Conference
32 American Association for World Health 278
Japan External Trade Organization
33 American Civic Association 279 Japan Information Access Project
34 American College of International Physicians 280
Japan US Community Education and Exchange
35 American Committee for KEEP 281 Japan-America Society of Washington,
199
D.C.
36 American Committee for Rescue and Resettlement of Iraqi Jews 282
Jesuit Refugee Service/U.S.A.
37 American Disaster Reserve 283 Jesuit Refugee Service/USA
38 American Ditchley Foundation 284 Jewish National Fund
39 American Friends Service Committee 285
Just Act: Youth Action for Global Justice
40 American Fund for Czechoslovak Relief 286
Katalysis Partnership
41 American Ireland Fund
287Korean American Sharing Movement, Inc.
42 American Jewish Joint Distribution Committee 288
Lalmba Association
43 American Jewish Philanthropic Fund 289
Latter-day Saint Charities
44 American Jewish World Service 290 Lay Mission-Helpers Association
45 American Near East Refugee Aid 291 Liberty's Promise
46 American Peace Society 292 Life for Relief and Development
47 American Red Cross International Services 293
Los Ninos
48 American Red Cross National Headquarters 294
Lutheran Immigration and Refugee Service
49 American Red Cross Overseas Association 295
Lutheran Immigration and Refugee Service, North Dakota Chapter
50
American Red Magen David for Israel - American Friends of Magen David 296
Lutheran World Relief
51 American Refugee Committee
297Macedonian American Friendship Association
52 American Rescue Dog Association 298 MAP International
53 American Sovereignty Task Force 299 Mayor's International Cabinet
200
54 American Task Force on Palestine 300 Media Associates International
55 Americares Foundation 301 Mennonite Central Committee
56 AmeriCares Foundation Inc. 302 Mennonite Disaster Service
57 America's Development Foundation 303
Mennonite Economic Development Associates
58 AMG International 304 Mercy Corps
59 Amigos de las Americas 305 Meridian International Center
60 Ananda Marga Universal Relief Team 306
Minnesota International Health Volunteers
61 Angelcare 307 Mirrer Yeshiva Central Institute
62 Ashoka: Innovators for the Public 308 Mission Doctors Association
63 Asian Resources 309 Mobility International USA
64 Associate Missionaries of the Assumption 310
National Association of Catastrophe Adjusters
65 Association for India's Development 311
National Association of Social Workers
66 Association for the Advancement of Dutch-American Studies 312
National Coalition for Asian Pacific American Community Development
67
Association for the Advancement of Policy, Research and Development in the Third World 313
National Coalition for Haitian Rights
68 Association for World Travel Exchange 314
National Committee on American Foreign Policy
69 Association of Cambodian Survivors of America 315
National Committee on United States-China Relations
70 Association of Concerned African Scholars 316
National Council for International Visitors
71 Association of Third World Studies
317National Democratic Institute for International Affairs
72 Association on Third World Affairs
318National Disaster Search Dog Foundation
201
73 Austrian Cultural Forum
319National Memorial Institute for the Prevention of Terrorism
74 Baltimore Council on Foreign Affairs 320
National Peace Corps Association
75 Baptist World Alliance/Baptist World Aid 321
National Ski Patrol System
76 Board of International Ministries
322National Student Campaign Against Hunger and Homelessness
77 BorderLinks
323National Voluntary Organizations Active in Disaster
78 Bread for the World 324 Need
79 Brother’s Brother Foundation, The 325 New England Foreign Affairs Coalition
80 Brother's Brother Foundation 326 New Forests Project
81 Business Alliance for International Economic Development 327
New York Association for New Americans
82 Business Council for International Understanding 328
North American Center for Emergency Communications
83 CARE
329North American Conference on Ethiopian Jewry
84 CARE International USA 330 Northwest Medical Teams
85 Caribbean-Central American Action 331 Northwest Medical Teams International
86 Carnegie Council on Ethics and International Affairs 332
Open Society Institute
87 Carnegie Endowment for International Peace 333
Open Voting Consortium
88 Catholic Medical Mission Board 334 Operation Crossroads Africa
89 Catholic Network of Volunteer Service 335
Operation Smile
90 Catholic Relief Services 336 Operation U.S.A.
91 Catholic Relief Services (U.S. Catholic Conference) 337
Operation Understanding
202
92 Center for International Disaster Information 338
Operation USA
93 Center For International Health and Cooperation 339
Opportunity International-U.S.
94 Center for Migration Studies of New York 340
Oregon Peace Works
95 Center for New National Security 341 Organization of Chinese Americans
96 Center for Russian and East European Jewry 342
Organization of Chinese Americans - Central Virginia
97 Center for Taiwan International Relations 343
Organization of Chinese Americans - Columbus Chapter
98 Center for Third World Organizing
344Organization of Chinese Americans - Dallas-Fort Worth Chapter
99 Center for War/Peace Studies
345Organization of Chinese Americans – Delaware
100 Central American Resource Center
346Organization of Chinese Americans - Eastern Virginia Chapter
101 Centre for Development and Population Activities 347
Organization of Chinese Americans - Greater Chicago Chapter
102 Centre for Development and Population Activities, The 348
Organization of Chinese Americans - Greater Houston Chapter
103 Children International Headquarters 349
Organization of Chinese Americans - Greater Los Angeles Chapter
104 Children's Corrective Surgery Society 350
Organization of Chinese Americans - Greater Washington, DC Chapter
105 China Connection
351Organization of Chinese Americans - Kentuckiana Chapter
106 China Medical Board of New York
352Organization of Chinese Americans - New England Chapter
107 Christian Children’s Fund
353Organization of Chinese Americans - Orange County
108 Christian Children's Fund
354Organization of Chinese Americans - Saint Louis Chapter
203
109 Christian Foundation for Children and Aging 355
Organization of Chinese Americans - Silicon Valley Chapter
110 Christian Medical and Dental Associations 356
Organization of Chinese Americans - Westchester Hudson Valley Chapter
111 Christian Reformed World Relief Committee 357
Our Little Brothers and Sisters
112 Christian Relief Services 358 OXFAM America
113 Christians for Peace in El Salvador 359 OXFAM International Advocacy Office
114 Church World Service 360 Pacific Basin Development Council
115 Church World Service, Immigration and Refugee Program 361
PACT
116 Citizen Diplomacy Council of San Diego 362
Panos Institute
117 Citizens Development Corps 363 Partners for Democratic Change
118 Citizens Network for Foreign Affairs 364 Partners for Development
119 Claretian Volunteers and Lay Missionaries 365
Pathfinder International
120 Coalition for American Leadership Abroad 366
Pax World Service
121
Collaborating Agencies Responding to Disasters of San Mateo County 367
Peace Action
122 Columbus Council on World Affairs
368Peace Action Texas, Greater Houston Chapter
123 Commission of the Churches on International Affairs 369
People to People International
124 Commission on International Programs 370
People-to-People Health Foundation
125 Committee for Economic Development 371
Phoenix Committee on Foreign Relations
126 Committee for the Economic Growth of Israel 372
Physicians for Human Rights
204
127 Committee on Missionary Evangelism 373
Piedmont Triad Council for International Visitors
128 Committee on US/Latin American Relations 374
PLAN International
129 Concern America 375 Planet Aid
130 CONCERN Worldwide US Inc. 376 Planning Assistance
131 Conflict Resolution Program 377 Plenty International
132 Congressional Hunger Center 378 Pontifical Mission for Palestine
133 Consultative Group on International Agricultural Research 379
Population Action International
134 Consultative Group to Assist the Poor 380
Presbyterian Disaster Assistance and Hunger Program
135 Consumers for World Trade 381 Presbyterian Hunger Program
136 Council on Foreign Relations 382 Project Concern International
137 Counterpart - United States Office 383 Project HOPE
138 Counterpart International
384Rav Tov International Jewish Rescue Organization
139 Counterpart International, Inc. 385 Red Sea Team International
140 CRISTA Ministries 386 Refugee Mentoring Program
141 Cuban American National Council 387 Refugee Women in Development
142 Development Group for Alternative Policies 388
Refugees International
143 Diplomatic and Consular Officers, Retired 389
Relief International
144 Direct Relief International
390Research Triangle International Visitors Council
145 Disaster Psychiatry Outreach 391 Rights Action/Guatemala Partners
146 DOCARE International, N.F.P. 392 Sabre Foundation
205
147 Doctors for Disaster Preparedness 393 Salesian Missioners
148 Doctors of the World, Inc.
394Salvation Army World Service Office, The
149 Doctors to the World
395San Antonio Council for International Visitors
150 Doctors Without Borders 396 San Diego World Affairs Council
151 Doctors Worldwide 397 Save the Children
152 East Bay Peace Action 398 Secretary's Open Forum
153 East Meets West Foundation 399 Self Help International
154 East West Institute
400September 11 Widows and Victims' Families Association
155 East-West Center 401 Servas-U.S.A.
156 Edge-ucate 402 Seva Foundation
157 Educational Concerns for Hunger Organization 403
SHARE Foundation
158 Egyptians Relief Association 404 Shelter For Life International
159 Eisenhower Fellowships 405 Sister Cities International
160 El Rescate
406Society for International Development - USA
161 Episcopal Church Missionary Community 407
Society of African Missions
162 Episcopal Relief and Development 408 Society of Missionaries of Africa
163 Estonian Relief Committee 409 South-East Asia Center
164 Ethiopian Community Development Council 410
Southeast Asia Resource Action Center
165 Families of September 11
411Southeast Consortium for International Development
166 FARMS International 412 Spanish Refugee Aid
167 Federation for American 413 Student Letter Exchange
206
Immigration Reform
168 Feed the Children 414 Student Pugwash U.S.A.
169 Fellowship International Mission 415 Survivors International
170 Filipino American Chamber of Commerce of Orange County 416
Task Force for Child Survival and Development
171 Financial Services Volunteer Corps 417 TechnoServe
172 Floresta U.S.A. 418 Teen Missions International
173 Flying Doctors of America 419 The Hospitality and Information Service
174 Food for the Hungry 420 The International Foundation
175 Food for the Poor
421The Joan B. Kroc Institute for International Peace Studies
176 Foreign Policy Association
422The Russian-American Center/Track Two Institute for Citizen Diplomacy
177 Foreign-born Information and Referral Network 423
Third World Conference Foundation
178 Foundation for International Community Assistance 424
Tibetan Aid Project
179 Foundation for Rational Economics and Education 425
TransAfrica Forum
180 Foundation for the Support of International Medical Training 426
Trees for Life
181 Fourth Freedom Forum 427 Trickle Up Program
182 Fourth World Documentation Project 428
Trickle Up Program, The
183 Freedom from Hunger 429 Trilateral Commission
184 Friends of Liberia 430 Trust for Mutual Understanding
185 Friendship Ambassadors Foundation 431
Tuesday's Children
186 Friendship Force International 432 Tuscaloosa Red Cross
207
187 Friendship Force of Dallas
433U.S. Association for the United Nations High Commissioner for Refugees
188 Futures for Children 434 U.S. Committee for UNDP
189 GALA: Globalization and Localization Association 435
U.S.A. - Business and Industry Advisory Committee to the OECD
190 GeoHazards International 436 U.S.A. for Africa
191 Global Health Council
437U.S.-China Peoples Friendship Association
192 Global Interdependence Center 438 U.S.-Japan Business Council
193 Global Options
439Unitarian Universalist Service Committee
194 Global Outreach Mission 440 United Jewish Communities
195 Global Policy Forum 441 United Methodist Committee on Relief
196 Global Resource Services
442United Nations Development Programme
197
Global Studies Association North America
443
United Nations Development Programme - Regional Bureau for Asia and the Pacific
198 Global Teams
444United States Canada Peace Anniversary Association
199
Global Volunteers
445
United States Catholic Conference/Migration and Refugee Services
200 GOAL USA 446 United States Committee for Refugees
201 God's Child Project 447 United States-Japan Foundation
202 Golden Rule Foundation 448 Uniterra Foundation
203 Grand Triangle 449 Upwardly Global
204 Grassroots International
450US Committee for Refugees and Immigrants
205 Habitat for Humanity International 451 US Fund for UNICEF
208
206 Haitian Refugee Center
452USA for the United Nations High Commissioner for Refugees
207 Healing the Children 453 Visions in Action
208 Health Volunteers Overseas 454 Voices in the Wilderness
209 Heartland Alliance
455Volunteer Missionary Movement - U.S. Office
210 Hebrew Immigrant Aid Society 456 Volunteers in Technical Assistance
211 Heifer International 457 War Child USA
212 Heifer Project International 458 Washington Institute of Foreign Affairs
213 Helen Keller International 459 Washington Office on Africa
214 Henry L. Stimson Center 460 Water for People
215 Henry M. Jackson Foundation
461Weatherhead Center for International Affairs
216 Hermandad 462 Welfare Research, Inc.
217 Hesperian Foundation 463 Win Without War
218 High Frontier Organization 464 Windows of Hope Family Relief Fund
219 Hispanic Council on International Relations 465
Wings of Hope
220 Holt International Children's Services 466
Winrock International
221 Hope International
467Wisconsin/Nicaragua Partners of the Americas
222 Hospitality Committee 468 WITNESS
223
Humanitarian Law Project - International Education Development 469
Women for Women International
224 Humanitarian Medical Relief 470 Women’s EDGE
225 Humanity International
471Women’s Environment and Development Organization
209
226 Hungarian American Coalition 472 World Affairs Council
227 Idaho Volunteer Organizations Active in Disasters 473
World Affairs Council of Pittsburgh
228 Immigration and Refugee Services of America 474
World Bank Group
229 Indian Muslim Relief Committee of ISNA 475
World Concern
230 INMED
476World Conference of Religions for Peace
231 Institute for Development Anthropology 477
World Development Federation
232 Institute for Intercultural Studies 478 World Education
233 Institute for International Cooperation and Development 479
World Emergency Relief
234 Institute for Sustainable Communities 480
World Federation of Public Health Associations
235 Institute for Transportation and Development Policy 481
World Hope International
236 Institute of Caribbean Studies 482 World Learning
237 InterAction 483 World Medical Mission
238 Interaction/American Council for Voluntary International Action 484
World Mercy Fund
239
Inter-American Parliamentary Group on Population and Development 485
World Neighbors
240 Interchurch Medical Assistance 486 World Policy Institute
241 Intermed International 487 World Rehabilitation Fund
242 International (Telecommunications) Disaster Recovery Association 488
World Relief
243 International Academy of Health Care Professionals 489
World Resources Institute
210
244 International Aid 490 World Vision (United States)
245 International Bank for Reconstruction and Development 491
Worldwatch Institute
246 International Catholic Migration Commission 492 Worldwide Friendship International
Appendix C – Survey Population
# Organization Name # Organization Name
1 Academy for Educational Development 83 International Institute of Rural Reconstruction
2 Action Against Hunger (USA) 84 International Medical Corps
3 ActionAid International USA 85 International Orthodox Christian Charities
4 Adventist Development and Relief Agency International 86 International Reading Association
5 Advocacy Institute 87 International Relief and Development
6 African Methodist Episcopal Church Service and Development Agency, Inc. 88 International Relief Teams
7 Africare 89 International Rescue Committee
8 Aga Khan Foundation U.S.A. 90 International Youth Foundation
9 Air Serv International 91 Interplast
10 Alliance for Peacebuilding 92 IPAS - USA
11 Alliance to End Hunger 93 Jesuit Refugee Service/USA
12 American Friends Service Committee 94 Joint Aid Management
13 American Jewish World Service 95 Keystone Human Services International
14 American Near East Refugee Aid 96 Latter-day Saint Charities
15 American Red Cross International Services 97 Life for Relief and Development
16 American Refugee Committee 98 Lutheran World Relief
17 AmeriCares 99 Management Sciences for Health
18 America's Development Foundation 100 MAP International
19 Amigos de las Americas 101 Medical Care Development
20 Baptist World Alliance 102 Medical Teams International
21 BRAC USA 103 Mental Disability Rights International
22 Bread for the World 104 Mercy Corps
23 Bread for the World Institute 105 Mercy-USA for Aid and Development, Inc.
24 Campaign for Innocent Victims in Conflict (CIVIC)
106 Mobility International USA
25 CARE 107 National Association of Social Workers
26 Catholic Medical Mission Board 108 National Committee on American Foreign Policy
27 Catholic Relief Services 109 National Peace Corps Association
28 Center for Health and Gender Equity 110 National Wildlife Federation
29 Center For International Health and Cooperation 111 ONE Campaign
30 Centre for Development and Population Activities 112 Open Society Institute
31 CHF International 113 Opportunity International
32 Christian Blind Mission USA 114 Oxfam America
33 Christian Children’s Fund 115 Pact
34 Church World Service 116 Pan American Health Organization
35 Citizens Development Corps 117 PATH
36 Citizens Network for Foreign Affairs, The 118 Pathfinder International
37 Communications Consortium Media Center 119 PCI-Media Impact
38 CONCERN Worldwide US Inc. 120 Perkins International
39 Congressional Hunger Center 121 Physicians for Human Rights
40 Conservation International 122 Physicians For Peace
41 Counterpart International, Inc. 123 Plan USA
42 Direct Relief International 124 Population Action International
43 Doctors without Borders 125 Population Services International
44 Earth Watch Institute 126 Presbyterian Disaster Assistance and Hunger Program
45 Educational Concerns for Hunger Organization (ECHO) 127 Project HOPE
46 Episcopal Relief & Development 128 ProLiteracy Worldwide
47 Family Care International 129 Refugees International
48 Florida Association of Volunteer Action in the Caribbean and the Americas 130 Relief International
49 Food for the Hungry 131 Salvation Army World Service Office, The
50 Freedom from Hunger 132 Save the Children
51 Friends of the World Food Program 133 SEVA Foundation
52 Gifts In Kind International 134 SHARE Foundation
53 Giving Children Hope 135 Society for International Development
54 Global Fund for Women 136 Stop Hunger Now
55 Global Resource Services 137 Support Group to Democracy
56 GOAL USA 138 Teach for America
57 Grassroots International 139 Transparency International - USA
58 Habitat for Humanity International 140 Trickle Up Program, The
59 Handicap International USA 141 U.S. Committee for Refugees and Immigrants
60 Hands On Disaster Response 142 U.S. Committee for UNDP
61 Heart to Heart International 143 U.S. Fund for UNICEF
62 Heartland Alliance 144 Unitarian Universalist Service Committee
63 Hebrew Immigrant Aid Society 145 United Methodist Committee on Relief
64 Heifer International 146 United States Association for UNHCR
65 Helen Keller International 147 United Way International
66 Hesperian Foundation 148 Water Aid America
67 Holt International Children’s Services 149 Weatherhead Center for International Affairs
68 Human Rights Watch 150 Winrock International
69 Hunger Project, The 151 Women’s Environment and Development Organization
70 Information Management & Mine Action Programs 152 Women's Commission for Refugees
71 INMED Partnerships for Children 153 World Concern
72 Institute for Sustainable Communities 154 World Conference of Religions for Peace
73 Institute of Cultural Affairs 155 World Education
74 InterAction 156 World Emergency Relief
75 International Aid 157 World Hope International
76 International Catholic Migration Commission
158 World Learning
77 International Center for Religion and Diplomacy
159 World Rehabilitation Fund
78 International Center for Research 160 World Relief