Every few months, I read an article or blog post about “evidence-based practices”—and particularly about what constitutes “evidence”. Last year, I read a provocative two-part essay in Stanford Social Innovation Review (SSIR) arguing that nonprofits should NOT evaluate their own work and should leave the creation of “evidence” to the experts.
When Rigorous Evaluation Isn’t Feasible
A small Bay Area nonprofit working to move immigrant Latino families out of poverty asked for my help with an internal performance measurement system that would let them compare key client characteristics before and after participation.
Because this approach lacks methodological rigor, it cannot determine whether observed pre-post differences are the result of the program (see the blog post above). But it can document changes in indicators of interest between the time clients start a program and after they’ve participated awhile or completed it. Those changes might be caused by the program, or by something unrelated.
Outcomes Measurement Made Easy
To make the internal measurement system practical, I helped the group’s leaders choose three top-priority outcomes. For each outcome, they selected indicators that the evaluation literature, the group’s key funders, and their own prior assessments identified as significant metrics of family well-being. We developed a logic model and interview questions in English and Spanish, along with a plan to administer the interview protocol non-intrusively at two time points. We avoided open-ended questions because they are more time-consuming to analyze and to turn into statistics without specialized software. After an enlightening user test, we modified a couple of questions.
As a final deliverable, I provided a template for staff to use to track their findings. After a few years, they will have information about the “dosage” (time in program) associated with positive change on each indicator. Along with focus groups, the pre-post interviews will provide important feedback on ways to improve the program for client families.
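To make the idea concrete, here is a minimal sketch of the kind of tally a tracking template supports: average pre-post change on each indicator, broken out by “dosage” (time in program). The field names and scores are hypothetical placeholders, not the nonprofit’s actual template or data.

```python
# Minimal sketch of pre-post "dosage" tracking, standard library only.
# Field names (indicator, baseline, followup, months) and the scores
# are illustrative placeholders.
from collections import defaultdict
from statistics import mean

# Each record: one client's score on one indicator at intake and follow-up.
records = [
    {"indicator": "income",            "baseline": 3, "followup": 4, "months": 6},
    {"indicator": "housing_stability", "baseline": 2, "followup": 4, "months": 6},
    {"indicator": "income",            "baseline": 2, "followup": 3, "months": 12},
    {"indicator": "housing_stability", "baseline": 3, "followup": 3, "months": 12},
]

# Group each client's pre-post change by indicator and time in program.
changes = defaultdict(list)
for r in records:
    changes[(r["indicator"], r["months"])].append(r["followup"] - r["baseline"])

# Average change per indicator per "dosage".
summary = {key: mean(vals) for key, vals in changes.items()}
for (indicator, months), avg in sorted(summary.items()):
    print(f"{indicator} @ {months} mo: avg change {avg:+.1f}")
```

Even a simple summary like this shows which indicators move, and after how much time in the program, which is exactly the feedback staff can pair with focus-group findings.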
California Nonprofits’ Economic Power
CalNonprofits’ new Causes Count, the first report on the economic impact of California’s social sector, is making quite a splash. I attended a fascinating panel discussion of funders at San Francisco’s Foundation Center. After a presentation of the study’s highlights, the funders talked about how they intend to use the report’s findings. The statewide foundation leaders said they’d use them to direct more resources to the fastest-growing regions of the state, whose nonprofits are vastly under-resourced. Download the report here.
Theory of Change Workshop Coming Up
I’ll teach an Introduction to Theory of Change planning workshop at the Volunteer Center of Sonoma County in Santa Rosa on December 9, 2-4 p.m. (fee). Hope to see some of you there!
Please keep in touch as you consider how to assess your programs and to use data to improve them. I’d love to hear about your successes and challenges.
Warm wishes for a productive and healthy autumn! Eleanor
1. SSIR-FSG podcast of a July 2014 panel discussion with foundation and other social sector thought leaders discussing Next Generation Evaluation (particularly, shared measurement, “big data”, and developmental evaluation): http://www.ssireview.org/podcasts/entry/implications_for_the_social_sector?utm_source=Enews&utm_medium=Email&utm_campaign=SSIR_Now&utm_content=Title
2. Idealware 2011 article reviewing several online survey tools, such as Survey Monkey, some of which have free or low-cost subscription options: http://www.idealware.org/articles/fgt_online_surveys.php
3. TRASI (Tools and Resources for Assessing Social Impact), developed by the Foundation Center with McKinsey, lists more than 150 tools and approaches with short summaries and download links: http://trasi.foundationcenter.org