Start Where You Are: Using Program Data to Learn, Manage & Report

Editor’s note: This post was co-authored with my colleague Deb Stephenson, who advises nonprofits on the design and implementation of information systems and has a genius for diagnosing and solving technology problems.

With all the talk about nonprofits needing to prove their impact, and all the stories about “big data”, pressure to keep up is prompting some groups to acquire sophisticated program management systems such as Salesforce and ETO. Groups that leap before looking at their actual data needs are likely to find themselves saddled with an overly complex system their staff has neither the skill nor the motivation to master. We frequently hear about these powerhouse systems lying idle, never fully implemented, the trained system administrator long gone.

In this post, we offer suggestions for thoughtfully determining your organization’s program management and reporting needs BEFORE even discussing technology tools. In our respective practices (Eleanor’s evaluation consulting and Deb’s information technology work), we find that groups that follow the “Goldilocks Principle” come out ahead. These are the ones that, before considering a particular tool, ask questions such as: What do we really need data to tell us? For which purposes and audiences will we use the data we collect? What kind of system would fit our organization’s capacity, infrastructure and culture “just right”? Which tool would enable us to meet current needs and gradually grow our data capacity?

Eleanor’s recent post “Talking About What Works” introduced the concept of a “Continuum of Rigor” for program evaluation. She suggested that an organization not try to undertake rigorous outcomes evaluation early in its data collection journey. Rather, it should adopt a phased approach to gradually build internal evaluation capacity, starting with simple monitoring of program inputs and outputs. By the same token, an organization should start small and phase in its program management technology appropriately.

In a completely different context, the Reverend Martin Luther King, Jr., eloquently described the importance of getting started in a feasible and appropriate way:

“If you can’t fly, then run.

If you can’t run, then walk.

If you can’t walk, then crawl.

But whatever you do, keep moving.”

We have seen over and over that organizations that develop their program evaluation, data collection, and analysis competencies one step at a time are the ones best able to identify which elements of their programs are working well and which require attention. As these organizations move along the evaluation continuum, their data collection and reporting needs evolve, and more robust technology tools begin to make sense. Instead of responding to peer or funder pressure to implement a “hot” new system or one recommended by a salesperson, these groups pursue new technology based on a clear understanding of what will best suit them.

Step One for Groups with No Data System

You may not think of it as such, but your organization really does have a data system. Even if it’s neither uniform nor officially recognized, staff members have developed a set of tools, in effect a system, for managing their work. It could be Post-It notes, spreadsheets, calendars, email, texts, a database, or any number of online apps. They use these tools to track their inputs and outputs: the number of sessions or classes held, attendance, which client received which service, and so forth.

Don’t be afraid to use “anecdotal” or casual data to start. One nonprofit Deb worked with lacked a systematic way to record and code reasons for client absences. When staff members mentioned they wondered if transportation challenges were responsible for a recent spike in absences, she suggested they do a quick email and text search on the words “late” and “bus” to compare results over time. The searches produced over 40 results in the previous month, but only 10 results in earlier months. Their “hunch” that transportation was a growing problem was now validated by “data”. As a consequence, the group was able to attract additional resources to solve the transportation problem.
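If your messages can be exported to a simple file, even a quick keyword check like this can be scripted. Below is a minimal sketch in Python; the file name “messages.csv” and its “date” and “body” columns are hypothetical stand-ins for whatever export your email or texting tool provides.

    import csv
    from collections import Counter

    # Tally messages that mention "late" or "bus" by month.
    # Assumes a hypothetical export file "messages.csv" with a
    # "date" column (YYYY-MM-DD) and a "body" column (message text).
    KEYWORDS = ("late", "bus")
    counts = Counter()

    with open("messages.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row["body"].lower()
            if any(word in text for word in KEYWORDS):
                counts[row["date"][:7]] += 1  # "YYYY-MM" month key

    for month in sorted(counts):
        print(month, counts[month])

Even a rough tally like this can turn a hunch into a number you can compare month over month.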

Another thing you can do right away is to schedule regular time for staff to pause and reflect on the data they need to do their jobs and to talk about the work-arounds each person has developed to obtain needed information. Allowing such reflection time enables you to identify “bright spot” solutions that might be replicated efficiently across the organization.

Be creative and thoughtful about the ways program staff already use data in their work, and consider what strategies might be even more effective for ensuring regular, timely access to needed information.

Avoid Collecting Unnecessary Data

A common pitfall on a group’s monitoring and evaluation journey is the urge to collect more and more data. Giving in to that urge can lead you down dangerous side paths:

  • Insights and action don’t come from data itself; they come from discussions about data. Resources used to collect data should be balanced with those used to gain meaningful insight.
  • Increasing the quantity of data often leads to problems with the quality of data. Significantly greater time and resources will be needed to “clean the data”.
  • Other risks grow along with the amount of data collected. Don’t needlessly create potential new openings for a data security breach.
  • When staff spend time collecting data they find neither useful nor meaningful, it causes an enormous drag on morale.
  • Thought and discipline are required to establish metrics that are standardized, compatible, and portable; these qualities will matter if you plan to invest in more sophisticated evaluation efforts (see the sketch after this list). One good reference is the National Information Exchange Model.
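To make that last point concrete, here is a minimal sketch of a shared “data dictionary” in Python. The field names and allowed values are hypothetical examples, not drawn from the National Information Exchange Model or any particular system; the point is simply that each metric is defined once, so everyone records it the same way.

    # Minimal sketch of a shared "data dictionary": each metric is
    # defined once, with its type and allowed values. Field names and
    # values are hypothetical examples.
    DATA_DICTIONARY = {
        "attendance_status": {"type": str, "allowed": {"present", "absent", "excused"}},
        "sessions_attended": {"type": int, "allowed": None},  # None = any value of this type
    }

    def validate(field, value):
        """Return True if a recorded value conforms to the dictionary."""
        spec = DATA_DICTIONARY[field]
        if not isinstance(value, spec["type"]):
            return False
        return spec["allowed"] is None or value in spec["allowed"]

    print(validate("attendance_status", "present"))  # True
    print(validate("attendance_status", "tardy"))    # False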

Moving Along the Evaluation and Technology Continuum

Now that you have developed simple, practical data collection and analysis solutions, and your team regularly reflects on data uses and needs, how will you know it’s time to move to the next step on the program evaluation/technology continuum? Rather than passively complying with funders’ demands for specific outcomes data, be intentional about the kind of outcomes data you and your staff want and need. Think about your own experience observing and listening to your clients, and ask questions such as: What do clients tell you, in both words and actions, about being in your program? Do they refer friends and relatives? Does their attendance drop off after a few weeks? In what ways, and for what uses, would it be helpful to better understand clients’ motivations for participating and the changes happening in their lives?
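Questions like the attendance one can often be answered with data you already have. Here is a minimal sketch in Python, assuming a hypothetical file “attendance.csv” with “client_id”, “enroll_date”, and “session_date” columns, that counts how many clients attend in each week after enrolling:

    import csv
    from collections import defaultdict
    from datetime import date

    # Count distinct clients attending in each week since enrollment,
    # to see whether participation drops off after the first few weeks.
    # Assumes a hypothetical "attendance.csv" with "client_id",
    # "enroll_date", and "session_date" columns (dates as YYYY-MM-DD).
    attendance_by_week = defaultdict(set)

    with open("attendance.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            enrolled = date.fromisoformat(row["enroll_date"])
            attended = date.fromisoformat(row["session_date"])
            week = (attended - enrolled).days // 7
            attendance_by_week[week].add(row["client_id"])

    for week in sorted(attendance_by_week):
        print(f"week {week}: {len(attendance_by_week[week])} clients")

If the counts fall sharply after week two or three, that is a signal worth discussing as a team.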

Your organization will know it’s time to move to the next phase when staff:

  1. Routinely ask thoughtful questions of their program data
  2. Take action based on what is learned, and
  3. Realize that their ability to effectively manage the program and continually improve clients’ experiences and outcomes depends on the availability of new data.

Another indicator that it’s time to seek a technology solution to your data challenges is when staff members’ sharing of best practices begins to yield a common “wish list” of software supports.

Remember, it’s critical to “measure what you value,” as noted in “What is Your Sacred Bundle?” If you attempt to value (and measure) everything, the result is a ton of data, none of which anyone truly cares about.

P.S. A new short guide from ETO maker Social Solutions, “Producing Consistent Outcomes to Ensure Successful Interventions” (free after registering), addresses the challenges of scaling successful programs, emphasizing the importance of phasing or staging program implementation to ensure strong, consistent results.

-Eleanor Smith & Deb Stephenson

Seedling photo copyright maxsattana