Don’t Bite Off More Than You Can Chew: Tips for Practical Performance Measurement Using Pre/Post Surveys


My firm has worked with a number of small nonprofit organizations that want help figuring out how to collect and analyze program outcomes data. They seek answers to questions such as: “Are our programs really working?” and “Are participants being helped in the ways we intend?” Most request help “creating a survey,” which, on the surface, seems like a straightforward endeavor.

We have found that many nonprofit leaders don’t realize that gathering the data to answer such questions is a much bigger undertaking than designing a simple questionnaire. The decision to start measuring program performance carries implications for the entire organization. If a group has not systematically gathered program outcomes data before, going down this road means “Alert: Big Changes Ahead!”

Serious new capacity is needed not only to collect program outcomes data, but also to analyze it and make meaning of it.

Critical Questions to Ask Before Launching a New Survey/Performance Management System

We have developed a set of critical questions to guide nonprofit leaders who are considering starting a new system for internal performance management (aka "performance measurement" or "internal evaluation") or who want to shore up existing efforts with a participant outcomes survey. Gleaned from our experience, these questions include:

1. Who in your organization has both the skills and the bandwidth to take the collected survey data (whether on paper or from an online survey tool such as SurveyMonkey) and turn it into “survey results” for board reports, grant writing, grant reporting, annual reports, marketing and other uses? Are those skills already on your staff? On your board? Will you need to train someone? Or will you need to contract out the data analysis?

2. If you’re planning to do everything in-house, what technology tools does your group have to conduct the data analysis? Think about the kinds of data, and the format of findings, you need beyond what the online tool supplies. For instance, if you are analyzing before-program and after-program data, how will you figure out the pre/post changes in the survey responses? As of this writing, the online tools only report on one survey at a time and don’t let you compare pre/post data; before-and-after comparisons must be done in Excel or a database program with analytical capabilities, and the best option is a statistics program such as STATA or SPSS (or a free program such as R). The sketch below shows what such a comparison involves.
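For illustration only, here is a minimal sketch of a pre/post comparison in Python using the free pandas library (one alternative to the tools named above). The file names, the participant_id column, the question columns and the 1-5 scoring are all hypothetical assumptions about your export format, not a prescription:

```python
import pandas as pd

# Hypothetical Likert answer choices scored 1-5; adapt to your own scale.
LIKERT_SCORES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

QUESTIONS = ["q1_confidence", "q2_knowledge"]  # hypothetical question columns

def load_survey(path):
    """Read one survey export and convert Likert text answers to scores."""
    df = pd.read_csv(path)
    for q in QUESTIONS:
        # Standardize stray whitespace and capitalization before scoring;
        # inconsistent answer labels are a common cause of "dirty" data.
        df[q] = df[q].str.strip().str.title().map(LIKERT_SCORES)
    return df

pre = load_survey("responses_pre.csv")    # hypothetical file names
post = load_survey("responses_post.csv")

# Match each participant's before and after answers on an ID number
# (see tip (a) below); anyone missing either survey is dropped here.
merged = pre.merge(post, on="participant_id", suffixes=("_pre", "_post"))

for q in QUESTIONS:
    change = merged[q + "_post"] - merged[q + "_pre"]
    print(q, "average pre/post change:", round(change.mean(), 2))
```

Whatever tool you choose, the steps are the same: score the answers, match each participant’s two surveys, and compute the change.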

3. Where will the paper or e-survey files be stored, and how (and by whom) will they be accessed? Who will manage the files and ensure they are labeled in a standardized way, stored properly and easily retrieved by all users?

4. Are you planning to collect more data than you have the ability to analyze on an ongoing basis? Or more data than you need for your specific uses? Plan carefully: collecting more data than can be readily analyzed and used is a waste of precious resources, including your time and your participants’ time.

5. How will you standardize the format of your questions over time, for the inevitable day when you need to change a question or a set of answer choices--or when you want to measure something completely new? One common approach, sketched below, is a versioned “codebook.”
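For illustration, here is one sketch of such a codebook in Python: every question keeps a permanent ID, and each revision of its wording or answer choices is recorded with the date it took effect. The structure, field names and dates are hypothetical assumptions, not a standard:

```python
# Hypothetical codebook: question IDs never change, while every revision
# of the wording or answer choices is logged with an effective date, so
# analysts can tell which version any given response file used.
codebook = {
    "q1_confidence": [
        {
            "effective": "2012-01",
            "text": "I feel confident managing my finances.",
            "choices": ["Strongly Disagree", "Disagree", "Neutral",
                        "Agree", "Strongly Agree"],
        },
        {
            "effective": "2013-06",  # wording revised; answer scale unchanged
            "text": "I feel confident managing my household budget.",
            "choices": ["Strongly Disagree", "Disagree", "Neutral",
                        "Agree", "Strongly Agree"],
        },
    ],
}

def question_version(question_id, survey_date):
    """Return the question version in effect on a given survey date."""
    versions = [v for v in codebook[question_id]
                if v["effective"] <= survey_date]  # ISO dates sort as text
    return max(versions, key=lambda v: v["effective"])

print(question_version("q1_confidence", "2014-01")["text"])
```

The payoff comes later: when a funder asks for three years of results, you can tell at a glance which answers are directly comparable and which straddle a question change.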

6. Have you figured out how you will glean important lessons from the survey data and apply them to improve your program offerings? If you are going to the considerable trouble of collecting and analyzing data in-house, build in processes (and incentives) for learning from “negative” findings to identify ways to better manage and/or redesign the program. Also consider soliciting feedback from your participants about the findings to add context and explain some of their answers. If you seek input from a wide range of people, both inside and outside the organization, you will gain useful insights and perspective on the importance of the different findings.

7. How will you balance the need for easy survey administration with the need for straightforward data analysis? Many groups new to internal program measurement err on the side of simplifying survey administration while forgetting data quality on the back end. Poor data quality (such as inconsistent answer choices, inconsistent participant matching and so forth) creates hassles such as the additional time and effort needed to “clean” the data. It can also erode the data’s reliability.

Tips for balancing ease of survey administration with ease of survey data analysis:

a) Assign participants identification numbers and use them on the before and after surveys to avoid the problem of inconsistently spelled names (and to preserve the anonymity of the data--especially important with minor children, whose parents must provide written consent before their children’s non-anonymous data can be collected).

b) Use mostly closed-ended questions, such as multiple choice (e.g., a-d answer choices or drop-down menus) and Likert-scale questions whose answers map to a numerical range (e.g., Strongly Disagree to Strongly Agree). Collect only a limited amount of open-ended data, such as: “Tell us about your experience of our program” or “Name 3 things you will do weekly now that you didn’t do before attending our program.”

c) Keep surveys short--typically 10 questions or fewer. Be strategic about which data you absolutely must have. Focus on short questions that measure your desired outcomes and are required for your intended uses of the data.

d) Store survey questionnaire files, answer keys and survey data files in the same folder, with one folder for each period (year, quarter, etc.) in which the survey is administered. Label files consistently, and be sure to label files with the dates of any question changes, as in the layout sketched below.
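For instance, a folder layout along these lines keeps each administration’s questionnaire, answer key and data together (the names are purely illustrative):

```
surveys/
  2013-Q1/
    questionnaire_v2_2013-01-10.pdf    (question 4 choices revised this date)
    answer_key_v2.csv
    responses_pre_2013-Q1.csv
    responses_post_2013-Q1.csv
  2013-Q2/
    ...
```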

These simple steps will go a long way toward ensuring data quality and more reliable survey results.

Thanks to Smith & Associates Associate Lauren Baehner for contributing her tips above.