Examining Cultural Bias in Evaluation

Over the past few years, I’ve been called to examine the ways my cultural, racial and socio-economic background influences the work I do with nonprofit staff and boards, especially in program evaluation projects. I’d like to share a few insights I’ve gained from this exploration of what is now often called “white privilege”.

My middle-class, Northern European-American family lived in a small rural town on Long Island; we enjoyed material comfort but not “wealth.” I attended public schools with students from myriad ethnic and cultural backgrounds. My family was privileged in ways many of my childhood friends were not—the direct result of our white skin and membership in the country’s dominant social group. In college in the ‘70s, I majored in Comparative Culture, learning about institutional racism and other forms of social injustice. I took to heart the lessons in such courses as “Power in America,” “The Image of Minorities in Film,” and “Social Problems in the Black Community.” My outrage and idealism stirred, I became an investigative journalist, eager to expose corruption, “speak truth to power” and make the world more just for all.

Fast forward to the past several years: my reflections on race and class privilege, and how they influence the consulting work I now do, involve examining the implicit and explicit messages I got from my family and early surroundings. I observed that it was not only important to excel academically; I had to be the best student in the class. As the youngest child, with two older brothers and two older sisters, I also learned that to get what I needed and wanted, I had to be vocal and direct. These lessons had a big effect on my personality and on my personal and career choices.

Throughout my adult life, close friends have expressed admiration for my courage, assertiveness, direct communication style and honesty. But I know that there’s a dark side to each of these “qualities.”

Some years back, a client commented that while she appreciated these traits during the project I led with her team, “You’re not everyone’s cup of tea.” That stayed with me, and I thought about how the personality traits shaped by my social, cultural, and family background had affected my interactions with her staff, mostly people whose cultural backgrounds and experiences of privilege were unlike mine. Had I been too assertive? Too direct? Perhaps what I considered straightforward had come across as confrontational.

My self-awareness has grown in the intervening years. In my program evaluation and strategy consulting with people who self-identify differently than I do, I need to regularly pause and notice the perceptions and assumptions running through my head—to figure out how the lens through which I’m looking is shaping my view of what’s really happening. I need to pay close attention to how I’m speaking and behaving. I’ve learned that such curiosity and sensitivity vastly increase the odds that my interactions with the client and our project will be successful.

Currently I am working with a well-educated, predominantly white staff team. While it may not look like it on the surface, I’m aware that interacting with these people feels a lot like cross-cultural work. None of the team had been exposed to program evaluation concepts before this project, nor had they chosen educational and career paths like mine. I am working hard to express the relevant ideas and vocabulary in ways that make sense conceptually. I’m also trying to articulate the implicit values I bring about collecting and reporting program outcomes data to people who appear to have distinctly different passions and priorities.

Facing Our Implicit Biases and Weaponized Data

This brings me to the importance of program evaluators being aware of the assumptions and biases we bring to our work. I recently re-read Vu Le’s excellent 2015 blog post, “Weaponized Data: How the Obsession with Data Has Been Hurting Marginalized Communities.”

Le describes numerous challenges, including “the illusion of objectivity,” “the delusion of validity,” and the “focus on accountability as a way to place blame.” He talks about the “Data-Resource Paradox,” in which a smaller organization “cannot get resources because it does not have capacity, so it cannot build capacity to get resources.” The result is that marginalized communities “are left in the dust because they simply cannot compete with the more established organization to gather and deploy data.”

He states that data is often used as a “gate-keeping strategy” to prevent experimentation—the result of another circular logic phenomenon where the lack of evidence of effectiveness is taken as proof that a new approach is actually NOT effective.

Le lists several strategies to de-weaponize data, including involving people from affected communities in hands-on data gathering and interpretation of evaluation findings and combining evaluation efforts with community engagement and mobilization to address inequity.

He also urges the social sector to redefine what constitutes good data, a topic near and dear to me, as followers of my blog know. He states, “It is not that marginalized communities don’t have the data, it is just that the data they do have does not conform to this mainstream definition of what data is or how it should be presented.” He argues, “The definition of data, as well as of ‘capacity’ and ‘readiness’ and other concepts, has been perpetuating inequity and needs to be changed.” Many of us agree with this.

A more academic piece in Nonprofit Quarterly last fall, “Research and Evaluation in the Nonprofit Sector: Implications for Equity, Diversity and Inclusion,” also called out the many ways evaluators, unaware of their cultural biases and assumptions about gathering and interpreting data, harm the very communities they are ostensibly trying to lift up. The authors articulate the key challenges facing evaluators, describe the creation of a new tool to help evaluators identify implicit biases, and recommend other strategies to build evaluators’ awareness of and sensitivity toward diversity, equity and inclusion.

These compelling reads shed light on the challenges of providing cross-cultural evaluation and inspire me to bring more self-awareness and humility to my work. As I seek to help organizations build their capacity to successfully “deploy” data so they can attract more resources and better serve their clients, I wonder: Am I bringing hurtful assumptions and harmful behaviors with me? How can I avoid this and reduce the harm I might be doing?

Photo by Stuart Miles, freedigitalphotos.com