How Widely Do You Share Your Evaluation Results?

This question intrigues me because of how much it implies about an organization’s culture and values around learning, failure and transparency.

In my Program Evaluation classes at CSUEB’s Nonprofit Management Program, students engage in a mock debate about whether it’s better for nonprofits to do their own program evaluation or to bring in external evaluators to assess their program’s effectiveness. Students offer numerous arguments on each side of this question. Considerations such as staff’s research and analysis skills, the steep cost of external evaluators and a host of organizational readiness issues are raised to make the case pro and con.

Students arguing in favor of external evaluation note the risk of inadvertent or even intentional bias in internal evaluation. Will nonprofit staffers introduce bias into their data collection because they aren’t skilled enough in ways to avoid it? Will those analyzing the data, and reporting it to stakeholders such as funders and donors, filter out the less-than-glowing findings to make the program appear more effective than the data indicate? In my classes, we don’t typically land on a one-size-fits-all answer to the question. I use the exercise to illustrate the very real practical and ethical concerns around conducting internal evaluation, which is what the vast majority of students come to my course to learn.

I came across a fascinating article about evaluation transparency that speaks to the huge risks most nonprofits face in sharing negative findings outside their organization. In a November 29, 2018 Vox article, Eshauna Smith describes the bold—and highly unusual—decision of international anti-poverty incubator Evidence Action to let its donors, funders and other stakeholders know that a rigorous 2017 external evaluation of one of its programs, No Lean Season, had found that it was not working as intended—despite prior evidence of effectiveness.

In a blog post on its website, Evidence Action staffers wrote, “Consistent with our organizational values, we are putting ‘evidence first,’ and using the 2017 results to make significant program improvements and pivots… We are continuing to rigorously test to see if program improvements have generated the desired impacts, with results emerging in 2019.”

The post added, “Until we assess these results, we will not be seeking additional funding for No Lean Season.”

Think about that. It is exceedingly rare for a nonprofit to: a) publicize research that shows one of its programs failed to benefit the people it was designed to serve, and b) commit publicly—before seeking further support for the program—to learning exactly why it didn’t work, fixing the problems and then investing in more evaluation to prove the new program is now worthy of investment.

Why Is This Kind of Transparency So Rare?

There are many reasons the story about Evidence Action and its program No Lean Season is so unusual.

First, Evidence Action is an outlier in the social sector. It was founded specifically to study, scale and then run promising programs. Failure—and learning from it—are baked into its structure and culture. The group incubates programs; when a program shows promise, it scales it and then rigorously evaluates it again to let stakeholders know where their investments are most likely to have the greatest social impact. If, as in the case of No Lean Season, the evaluation shows the program at scale isn’t working, the group iterates further and tests again. This is Evidence Action’s mission.

Another key reason this level of evaluation transparency is rare is the fear of losing money or even of going out of business.

As my Program Evaluation students always mention, it is financially risky for a small or even mid-sized nonprofit to share disappointing program evaluation findings with key stakeholders, many of whom are in a position to use the information to withhold needed support. The group’s leaders fear that a loss of funding will impact their ability to serve the people their program was designed to benefit.

Funder/grantee relationships come into play here. Wishing it weren’t so doesn’t change the fact that most are based on uneven levels of power, as well as a lack of support for program evaluation (see my 2015 post “Disconnect Between Funders & Grantees on Evaluation”). Unless incentives change and the nonprofit and its funders/donors have built an exceedingly high level of trust in each other, the organization may not be able to access the capital it needs to dissect disappointing evaluation results and use them to develop strategies that strengthen a program—and ultimately create more benefit for the program’s target population.

What Might Be Done to Increase Mutual Trust and Transparency?

Building two-way trust between philanthropists and nonprofits involves addressing the complex web of social, political, economic and organizational issues that impedes it. Not only is the current system of grantseeking and grantmaking set up to disincentivize deep trust, but everything from donors’ overemphasis on administration-to-program spending ratios, scarcity mindsets, legacy organizational cultures, institutionalized injustice and racism, and leadership attributes converges to produce powerful forces that maintain the status quo.

As countless writers and sector leaders have argued, despite the enormous obstacles to challenging how things are, philanthropy and nonprofits alike must begin to shift organizational norms and values in favor of tolerating greater risk in order to learn from disappointment or failure. It is the only way to figure out how to do things better. A bright spot is the recent GuideStar report that found organizations that had posted program and financial details on their public GuideStar profiles were stronger in many ways than those that had not.

A Personal Note

In my work, I prize honesty and transparency with my clients. When a consulting project hasn’t turned out as originally hoped, I invite the client to share with me why they weren’t completely satisfied and what they think I could have done better. By the same token, after each of my classes, I review my students’ feedback carefully so I can improve my courses for the next time. It’s humbling to receive negative feedback. But since I have a deeply held value to stay open to ways that I can be more effective in my consulting and teaching, I am willing to listen, learn and change.

If you have thoughts about sharing evaluation findings and/or receiving disappointing feedback, I invite you to get in touch and share them.