Heard of Moneyball for Government? If not, you might want to know about this growing movement encouraging policymakers to rely more heavily on data and evidence when making funding decisions—much as the Oakland A's, portrayed in the 2011 film, used statistics to build their team.
The folks behind Moneyball for Government include such heavyweights as former New York City mayor Michael Bloomberg and the former directors of Barack Obama's and George W. Bush's Office of Management and Budget. Along with other White House veterans, in 2012 they founded Results for America (RFA) "to improve outcomes for young people, their families, and communities by shifting public resources toward programs and practices that use evidence and data to improve quality and get better results."
Recently RFA partnered with The Bridgespan Group to conduct research on the suppliers of "evidence" of effective social programs, the users of such information, and the infrastructure connecting suppliers with users. Published in April, the fascinating report, "The What Works Marketplace: Helping Leaders Use Evidence to Make Smarter Choices," can be downloaded for free on Bridgespan's site.
Like any market, the “What Works Marketplace” consists of a “supply side” and a “demand side”:
- The supply side of this marketplace consists of clearinghouses and other repositories of "proof" (mostly rigorous evaluation studies) that certain programs are effective in reducing the social problems they address. (See a few listed on my website's Eval Resources page.)
- The demand side consists of government, philanthropic, and private sector decision makers who use "what works evidence" to choose which social interventions to invest in, as well as other decision makers who want to use this evidence more often.
Why This Market Matters
The report’s authors feel strongly about the moral imperative to spend public dollars wisely: “As a nation, we owe it to all citizens to invest our resources in the most effective solutions to the problems we face.”
Their research concludes that insufficient proof about what works exists, that the proof that does exist is not easily accessible to decision makers when they need it, and that, as a result, the evidence is seldom used. "Less than 1 percent of federal government spending is backed by even the most basic evidence of impact. It may be that many government programs are working. We just don't know," the report states.
Although they want policymakers to use data from the what works marketplace to make funding decisions, the authors argue against its punitive use: “The purpose of such a market is learning and continuous improvement. It is not the separation of interventions into two neat categories of those that work and those that don’t work. Our research shows that effectiveness is far more nuanced and constantly evolving…. Judging interventions could discourage innovation because of fear of negative repercussions.” It’s not entirely clear how such a separation is to be avoided, however.
Major Gaps in the What Works Marketplace
Key findings of the report include:
- It is not easy to identify the most effective solutions to social problems.
- Both supply and demand for evidence on effectiveness are growing, but, not surprisingly, there are growing pains.
Six major gaps in the marketplace prevent supply from effectively meeting the requirements of demand, among them:
- Comprehensiveness. Decision makers want information on a broader range of interventions with varying levels of effectiveness. And they want to know which interventions have not been reviewed or rated.
- Implementation. Decision makers want information about interventions beyond evidence of impact—including peer experience implementing the intervention—to help them make informed decisions. Few clearinghouses provide this level of information.
- Usability. Users do not find clearinghouses easy to use, nor do they understand the differences between them.
Implications for Nonprofits
It behooves nonprofits seeking support from government and a growing number of “data-driven” foundations and corporations to document the existing evidence of their programs’ effectiveness. See my tips on conducting a literature review to do this.
In the absence of such data, groups face tough choices about incorporating "evidence-based practices" into their programming and accessing the considerable expertise needed to conduct a rigorous impact evaluation of their as-yet-unvalidated solution.
Read the rest of the report and the authors’ recommendations here.