Making Program Evaluation More Accessible to Nonprofits

I just finished teaching a 2-evening Program Evaluation course at California State University East Bay. This month, I engaged students in a quasi-debate fueled by a 2-part SSIR essay arguing that nonprofit service providers should NOT evaluate their own program outcomes. Because they lack social science research expertise and face conflicts of interest in reporting objective findings, the argument goes, nonprofits should instead learn what has been proven to work in their field (based on rigorous evaluations) and implement these "evidence-based practices" (EBPs) faithfully. In implementing EBPs, they should monitor outputs as a way to bolster program delivery. "Leave program evaluation to the experts and focus on delivering good programs" is the author's clear message. See my post about these essays last fall.

As I predicted, the essays prompted lively discussion. Students said their mission was to serve their communities, "not conduct research." Many acknowledged it was unlikely their organizations would report negative findings to stakeholders. Some did not like the idea, however, that they should diligently follow "a recipe" and deliver a proven program model imported from somewhere else. A couple said they would keep doing what they could, within reason, to collect before-and-after data on their program participants' outcomes. There was plenty of disagreement on this issue among my students, just as there is among philanthropists and service providers in the real world of nonprofit program evaluation and performance management.

A Possible Solution

A Philadelphia consulting firm has created an affordable, accessible online evaluation system that leverages "big data" on what works. Algorhythm's new iLearning Institutes offer a user-friendly interface that makes it easier for nonprofits to conduct outcomes evaluation and, down the road, for funders to predict the likelihood of success of their grantees' programs.

The iLearning Institutes seek to enable service providers in several areas, such as youth development and capacity building, to conduct quasi-experimental outcomes research using pre- and post-program surveys based on research-validated tools.

Here's how it works: After subscribers enter details about a program and its participants, they administer online surveys to both staff and participants. The system then provides statistical and narrative analyses comparing their program's outcomes against social science research on similar programs' "effective practices" (the "big data"). Staff and participant perceptions of the program are compared side by side to shed light on differences and to suggest areas for further examination. Subscribers also get a set of recommendations on ways to improve their program, drawn from the evaluation literature in their field.
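To make the pre/post idea concrete, here is a minimal sketch of how matched pre- and post-survey scores might be compared for a single outcome. This is not Algorhythm's actual method; the column names and scores are hypothetical, and it assumes the pandas and scipy libraries are available.

```python
# Hypothetical sketch of a pre/post outcomes comparison.
# It only illustrates the general idea of comparing matched
# pre- and post-program survey scores for one outcome measure.
import pandas as pd
from scipy import stats

# Hypothetical survey data: one row per participant, scores on a 1-5 scale.
data = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "pre_score":  [2.8, 3.1, 2.5, 3.4, 2.9, 3.0],
    "post_score": [3.5, 3.3, 3.1, 3.6, 3.4, 3.2],
})

# Average change from pre to post.
change = data["post_score"] - data["pre_score"]
print(f"Mean change: {change.mean():.2f} points")

# Paired t-test: is the improvement larger than chance alone would suggest?
t_stat, p_value = stats.ttest_rel(data["post_score"], data["pre_score"])
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A real system would also handle missing data, validated scales, and benchmarking against similar programs, which is precisely the heavy lifting the iLearning approach promises to take off subscribers' plates.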

The Algorhythm folks say the draw for funders is the system's "predictive analytics": its ability to predict program outcomes and impacts by comparing a program's inputs and outputs to the evaluation literature. A group can get started with an annual subscription priced by the number of participants it wants to collect data from. For 50 participants, the cost is $750 a year (about $15 per participant), much less than hiring an evaluator or using a web-based system such as Efforts to Outcomes.

When I first learned about Algorhythm's iLearning system, I thought, "This sounds like a huge step forward." I like how it appears to marry big data on evaluation with easy-to-use tools for administering outcomes surveys and receiving real-time analyses and research-based program recommendations, all without having to become a social scientist or find the big bucks to hire one. I urge you to check it out and tell me what YOU think.

Image courtesy of Stuart Miles, Free Digital Photos