Musings on Cultural Humility in Evaluation

I’ve been thinking a lot lately about my work with the diverse leaders and staff of nonprofits who have sought my help selecting meaningful indicators to track program outcomes and then creating practical strategies for collecting, analyzing, and reporting the data to stakeholders.

A couple of recent experiences prompted me to reflect on this “evaluation capacity-building” work and what it really involves. I’ve been considering the ways I act out unspoken assumptions and pondering such questions as “which lenses am I using when I do this work?” and “whose values matter?” I’m wondering whether what I have, in the past, perceived as client resistance to evaluation may have actually been resistance to some of the “white supremacist” attitudes and behaviors I unconsciously brought to the work (see White Supremacy Culture in Organizations).

Two years ago, I started a journey of self-exploration around my evaluation work and shared the post “Examining Cultural Bias in Evaluation.” I cited the incomparable sector leader and blogger Vu Le, whose seminal 2015 post, “Weaponized Data: How the Obsession with Data Has Been Hurting Marginalized Communities,” turned a lot of heads, including mine.

I’m now continuing my journey to better understand how I manifest deeply ingrained beliefs and values that may harm people who are different from me, and to see how I can begin to unpack them. My goals are to bring more self-awareness and humility to my work, develop improved relationships with my clients and their stakeholders, and, in the process, advance the cause of social justice.

What Exactly Is Evaluation Capacity-Building?

I recently took a two-day, in-person course on evaluation capacity-building (ECB). I was intrigued by the course description because of how challenging I have often found this work to be. The course turned out to be not exactly what I expected.

The engaging instructor, a seasoned evaluator and university professor, provided an overview of the research on ECB. We reviewed published studies on ECB interventions shown by “high-quality” evaluation to produce such outcomes as an uptick in “evaluative thinking” and a group’s increased use of more rigorous evaluation methods. She then offered a tool, the familiar logic model, to use in designing an ECB intervention, which would be our class project.

Because I’ve used logic models in most of the ECB interventions I’ve led over the past dozen years, the format was familiar, and I felt comfortable starting one for an upcoming consulting project, the third in a series with a repeat client I know well.

As students shared early drafts of their logic models and asked questions about one another’s models, however, I started feeling uneasy. I had hoped the instructor, and the similarly intentioned people in the class, would steer me to good ways to help groups shift attitudes and behaviors toward a more pro-evaluation organizational culture, which is what I believe constitutes “evaluation capacity-building”. Instead, I sensed an emphasis on creating a good logic model.

This observation was a big aha for me. I’ve relied on logic models in my work and have often asked clients who found them challenging to “suspend their disbelief”. I told them that having a model linking activities and intended changes over time was going to help them focus their resources to increase the odds of achieving those results. In the ECB course, I believe I got a taste of my own medicine. Later, I wondered if my discomfort was similar to that of former clients who questioned the merit of drawing boxes and arrows in a static, two-dimensional flow chart that ignored a program’s multi-faceted historical, social, political and economic context.

To be fair, it wasn’t realistic for a short course like this to consider the background, complexities and nuances of each student’s program. Nor was there time to delve into organizational development and the dynamics of culture change. At least I can now take some time to explore more fully the ECB evaluation literature the instructor referenced.

While I appreciated the course, I came away with only an inkling of how to shift hearts and minds toward prizing program evaluation for learning. Perhaps more important was the insight that, in the future, I should use logic modeling more thoughtfully.

Different Ways of Knowing

A few days after the course, I met up with an old friend, a Jungian analyst with a lifelong reverence for Native American culture and spirituality. I mentioned the course I’d taken and described how it related to my work with nonprofits. He was blunt about the harm he believed had been done by his field’s reliance on rigorous evaluation to narrow the sanctioned psychological interventions to “evidence-based practices”. He had particularly harsh words about the impact of “white man’s thinking” on the highly personal and contextualized work of psychotherapy. I was a bit taken aback at first, but later I read the article he shared on different ways of knowing, which I found fascinating.

The piece, “Two Ways of Knowing” (Sun Magazine, April 2016), featured an interview with Indigenous, scientifically trained botanist Robin Wall Kimmerer, a member of the Citizen Potawatomi Nation and a Distinguished Teaching Professor at the SUNY College of Environmental Science and Forestry in Syracuse.

Kimmerer talked about the importance of relationships and mutuality in any kind of scientific exploration. She noted that the word “impact” implies that the actor, whether undertaking an “intervention” or a study of one, stands outside the group of people the intervention seeks to influence or the study aims to understand. This sounded to me a lot like evaluation and evaluation capacity-building: things done to others.

Kimmerer stated that the problem with approaches relying solely on scientific inquiry (including experimental design and other quantitative forms of evaluation) is that they ignore the reality that the observer always influences the observed; observation effects can neither be separated out nor fully “controlled for.” Instead, we need to design more respectful, shared-power partnerships if we are to gain deeper understanding of how something really works, whether a forest ecosystem or a social sector program.

As happens so often when we shift our attention in a new direction, I’m finding a great deal of synchronicity as I explore these issues of greater self-awareness and cultural humility.

The Kimmerer interview reminded me of a 2017 Nonprofit Quarterly piece I’d read called “Multiple Ways of Knowing,” by Elissa Sloan Perry and Aja Couchois Duncan. The piece describes four types of knowledge: practical, artistic, generalized, and foundational. It’s well worth a read, so I won’t go into it here. It concludes with the following quote from Peter Reason’s 1998 A Layperson's Guide to Co-operative Inquiry:

“Knowing will be more valid—richer, deeper, more true-to-life and more useful if…our knowing is grounded in our experience, expressed through our art, understood through theories which make sense to us, and expressed in worthwhile action in our lives.”

Is Quantitative Evaluation Hurting DEI Efforts?

Lastly (for now), I read with keen interest a recent Chronicle of Philanthropy interview with American Evaluation Association Board Member and long-time evaluator Jara Dean-Coffey.

The article notes that Dean-Coffey, Director of the Equitable Evaluation Initiative, believes something important is being overlooked in foundations’ attempts to improve their diversity, equity, and inclusion, or DEI: how we define and measure success. She sees the reliance on statistical data as not only inadequate but likely damaging to the very DEI efforts philanthropy is pursuing as a way to build social justice.

According to Dean-Coffey, among the troublesome “orthodoxies” that have dominated foundation and social sector evaluation for decades is the belief that the foundation sponsoring an evaluation is the key user of its findings, not the nonprofits doing the work or the stakeholders they work with. Underlying all of the dominant evaluation practices, Dean-Coffey says, is the idea that data is neutral and doesn’t reflect the biases of the people who collect and analyze it, which is the same point Kimmerer made.

In the piece, Dean-Coffey pointed to the “gold standard” of social sector evaluation: the randomized controlled trial, an experimental design. She said that while the approach allows researchers to make generalized statements about a program’s impact, it doesn’t take into account the historical or geographical context of the program and the people it serves. So, on the ground, it may be of little practical value.

I am grateful to Kimmerer, Sloan Perry, Dean-Coffey, Vu Le and many other pioneers in this effort to re-imagine more equitable, less harmful evaluation approaches. I will share more about my learning journey in a future post.

Many hands photo: Perry Grone, Unsplash