It seems I see a new blog post or article every day on randomized controlled trials, evidence-based practices, and what kinds of program evaluation yield acceptable “proof” of impact. A provocative two-part essay in Stanford Social Innovation Review (SSIR) last year argued that nonprofits should NOT evaluate their own work and should leave the creation of “evidence” to the experts. (The essays’ author, Caroline Fiennes, is the director of Giving Evidence, a consultancy promoting charitable giving based on evidence; she sits on the boards of The Cochrane Collaboration, the Center for Effective Philanthropy, and Charity Navigator.)
Performance Measurement vs. Impact Evaluation
Fiennes argues that a fundamental difference exists between a charity’s ability to monitor the implementation of its program (program performance) and its capacity to evaluate the efficacy of the program’s model or “idea.” The former activity is straightforward and does not require special skills; in her view, performance measurement can and should be done in-house. The latter, in contrast, is social science research requiring impartiality and advanced knowledge and skills.
To prove that a certain program caused the outcomes observed, Fiennes believes a randomized controlled trial (RCT) must be conducted by social scientists with subject-matter and methodological expertise. Called the evaluation “gold standard,” RCTs are very costly, often ethically dubious, and always time-consuming.
Could “Evidence” Be Broader than RCTs?
Others question whether the RCT should retain its title of “gold standard.” Could other kinds of evaluation provide more timely and “good enough” proof that a social program is effective? This discourse is not limited to academic circles. The Pay for Success and Social Impact Bond movements are fast gaining traction; government, foundation, and financial market investors, as well as social program leaders, need answers to these questions right now.
Another thought-provoking SSIR article (Fall 2012) examines the debate between “the Experimentalists” (among whom Fiennes could be counted) and “the Inclusionists.” Author Lisbeth Schorr, a self-professed “Inclusionist,” argues for a much broader definition of evidence, citing RCTs’ cost and practical limitations as well as the urgency of implementing effective social programs. She puts forth four principles she believes the two sides agree are critical for success.
1) Begin with a results framework, such as a theory of change.
2) Match evaluation methods to their purposes.
3) Draw on credible evidence from multiple sources.
4) Identify the core components of successful interventions.
I’ve been encouraging my clients to do these things for years. Below I’ll touch on Schorr’s items #3 and #4.
Check Out What We Already Know About What Works
If you plan or operate social sector programs, it’s imperative that you become familiar with the interventions that both rigorous evaluation and the best-practices literature have shown to be effective in your area of work. Then take the next step and incorporate these practices into your program.
A growing number of searchable online databases of “evidence-based practices,” or EBPs, can assist you in finding pertinent published studies. (See Evaluation Resources here.) Interventions addressing everything from school readiness and healthcare disparities to prisoner re-entry have been evaluated systematically and repeatedly. The findings of these studies provide easily accessible evidence to groups using the same program model.
The sector’s thought leaders and investors in social programs are buzzing about “EBPs.” If you can demonstrate you are using one, you become eligible for funding not available to “unproven” programs or ideas.
I’m currently helping a nonprofit develop a clear case for its gang prevention program to be included on the website of Sonoma County Upstream Investments. The site is a repository of evidence-based programs used by funders and donors to identify and support effective, impactful interventions.
The Upstream Portfolio consists of three tiers of EBPs: Tier 1, Evidence-based Practices; Tier 2, Promising Practices; and Tier 3, Emerging Practices. We are preparing a Tier 3 application for an existing program that uses three different EBPs in novel ways. We will submit a literature review, a logic model/theory of change, and a detailed evaluation plan (Oy vey!).
What do YOU think nonprofits should do in response to stakeholder demands for “evidence” that their program is effective? Send me an email and let me know your thoughts.