When an organization’s leaders embark on the journey to systematically seek feedback from participants about their experiences of a program, they have made a decision—conscious or not—that business as usual is about to change. For the new internal program evaluation system to succeed, the group may need to acquire new resources, but for sure it’s going to need a learning mindset.
As I wrote in a previous post, "Don't Bite Off More Than You Can Chew," the decision to start collecting and analyzing program evaluation data carries with it implications that will ripple throughout the whole organization. Leaders need to consider the changes that ongoing internal evaluation will create, both big-picture and in-the-weeds. They must prepare the ground, involving both program and administrative staff, in advance. Wise leaders will develop an inclusive and thoughtful process before they start to put into place new systems, practices, personnel and tools. That process will foresee, allow for and accommodate the powerful human and organizational impulses to resist change.
Change management leadership is needed to steer the organization in new directions, to reallocate resources, modify existing expectations and job descriptions, and gradually shift organizational culture to support the new course. This means the group's leaders must be able to anticipate obstacles, plan for them, and model a willingness to fail and learn from missteps. They also need to validate for everyone that it's probably going to be a bumpy ride for a couple of years and that it's okay to be uncomfortable and raise concerns along the way. (See Gates Foundation case study on how NOT to manage evaluation-related change.)
To get a handle on why organizations shift their way of doing business to support evaluation and learning, let’s take a brief side road to look at the context influencing this kind of change.
Motivation to Start Evaluating Programs
For the past two decades, the social sector has seen a growing emphasis on learning about and broadly disseminating "what works" to solve entrenched environmental and social challenges. This desire spurred the smart philanthropy movement and its call to collect and use evidence from rigorous evaluation research by social scientists to drive policies and investment toward interventions that demonstrably move the needle in early childhood education, criminal justice reform and climate change advocacy, to name just a few areas.
This movement has had both positive and negative consequences. To my mind, a positive consequence is that we now know more about effective strategies to solve tough issues that previously seemed intractable. Another is that both foundations and the groups they support are more focused on identifying the impact they hope to create, which makes them more intentional, better aligned and better able to achieve it. They are also more committed to tracking their progress and, in the process, learning ways to better serve their communities and realize their mission.
On the flip side, however, is a tendency of some nonprofit organizations–particularly those just starting their evaluation journey–to undertake the voyage in order to support fundraising, rather than program management or improvement. This is understandable, since most small or new groups must keep their eyes on ensuring a steady stream of donations and grants in order to survive. They are simply responding to the philanthropic market’s demand for proof that their program works. But this narrow motivation can be problematic, for a couple of reasons.
Many funders and donors don't appreciate that most of their grantees lack the requisite research and analysis skills to "prove" that even carefully observed participant outcomes are the product of their activities or services. As I've written about before, it takes much more than a pre- and post-program survey to prove causation; it requires a research design that accounts for the "counterfactual"–what would have happened in the program's absence–and thereby rules out other possible explanations for observed outcomes. Due to power differentials, however, it can be tough for nonprofit leaders–especially those in small or newer groups–to explain the practical limits of their program evaluation efforts to their funders (see "Disconnect Between Funders & Grantees on Evaluation").
There's a further reason a tight link between evaluation and fundraising can be troublesome. Some nonprofit leaders and staff assume that the data they collect will invariably show that their programs are hitting their mark and that funders and donors will support them once they have "the data." One new client told me that her organization "need[s] evaluation to help sell their programs."
I sympathized with her motivation and did my best to explain the benefits of being able to track both positive and negative data. My concern was that the considerable effort to set up and maintain a new evaluation system might not be used in service of program improvement. When groups are motivated to undertake evaluation simply to attract funding, they may miss signals in their data that a program is not working as intended and, thus, fail to learn how to improve it so that more participants benefit or so that participants benefit more. The key word here is “learn.”
Getting Back to Ways to Grow Organizational Capacity to Learn
In my work, I've observed that the most important factor underlying an organization's learning culture is leaders who are both deeply curious about the process of social change and intensely committed to making demonstrable progress. Of course, the funders such leaders partner with must also be open to learning from their own and grantees' mistakes in order to encourage risk-taking and change. But what else might encourage a philanthropist or nonprofit leader to become more willing to undertake evaluation as learning? I'm curious about "bright spot" policies and strategies that have shifted organizational incentives away from evaluation for fundraising and toward evaluation as a means to build stronger programs. If you know of any, please share! I'm sure I'll revisit this topic later.
Meantime, I'll end with a few takeaways from an October AEA webinar, "Organizational Capacity to Do and Use Evaluation: Insights From a Research Program," presented by University of Ottawa (Canada) researchers Brad Cousins and Hind Al Hudib:
- Context matters when considering evaluation capacity: within the program itself, within the larger organization and within the social, economic and political context the organization inhabits [as the discussion above suggests]. In other words, what in the organization's context is motivating stakeholders to engage in program evaluation? This is the incentives question: Is it solely to provide accountability to funders and to secure funding? Or do stakeholders want to learn how they can better deliver their programs and services to increase their social impact?
- In the researchers’ study of evaluation capacity in Canadian organizations, there was a much greater emphasis on accountability within governmental entities than within voluntary sector (nonprofit) organizations, which were more motivated to learn ways to better serve their intended beneficiaries.
- A key to building “learning capacity” is this tip for evaluation change management: identify and celebrate incremental “wins”—concrete examples of ways evaluation data have led to program improvements and/or other organizational benefits. The group must come to value evaluation for intrinsic reasons. For instance, managers may have learned from evaluation data that part of their program was causing unintended negative consequences for beneficiaries and were then able to mitigate the problem because they had data to support the needed change (and supportive funders and donors).
- The presenters shared research on tools (such as the Evaluation Capacity in Organizations Questionnaire) to systematically assess the role of evaluation in organizational development, particularly organizational learning capacity. The abstract is free; the full article is available for a fee on ScienceDirect.
I’d love to hear your thoughts on supporting a nonprofit’s capacity to both conduct and learn from program evaluation.