Thinking Deeply about Evaluation

The Endowment recently invited Virginia Tech professor Tom Archibald and Jane Buckley of JCB Consulting to lead internal workshops on Evaluative Thinking, a set of habits that helps leaders make sound judgments using good evidence. Given the increasing focus on evidence-based programs and on enhancing foundation strategy, we thought we’d share some of their insights.

Q: So, what is Evaluative Thinking?

Buckley: Evaluative Thinking is a habit of mind. It’s similar to critical thinking in that you don’t take things at face value. You put a high value on evidence, and on thinking carefully about what evidence you might need for any particular decision.

Q: What differentiates it from critical thinking?

Buckley: A critical thinker could be thought of as more passive: information comes in, and you critique it. An evaluative thinker is more proactive. They pose the question and then seek out different perspectives and sources of evidence.

Q: How well-known is this concept in the nonprofit and philanthropic sector?

Archibald: I would say the nonprofit and philanthropic sector is just catching on to it now. It may have talked about similar ideas over the past 10 years, as there’s been an increasing focus on evidence and impact. But this particular approach, I think, is brand new within the sector. There’s a lot of exciting potential for growth as more nonprofits and philanthropies become more intentional about Evaluative Thinking.

Q: What makes Evaluative Thinking different from conducting evaluations?

Buckley: Historically, there’s this idea that evaluation is summative. It comes at the end, and you make a judgment about something. It feels more like monitoring or auditing.

Evaluative Thinking is a way of working. Evaluation just becomes a tool. Evaluation itself, the collecting of data, doesn’t drive the whole learning system. What should drive the learning system is Evaluative Thinking and, in particular, questions based on thoughtful reflection and identifying assumptions. If you picture a car, Evaluative Thinking is the whole body of knowledge, skills and equipment that a master mechanic uses to diagnose problems and keep the car running smoothly. Evaluation is just one tool in her toolbox.

Q: What skills comprise the Evaluative Thinking ‘toolbox’?

Archibald: The discipline of identifying and testing assumptions, using questions to select the most important data to inform critical decisions, engaging with stakeholders to understand what is – or isn’t – working, and applying evidence in real time to make better decisions.

Q: What’s the benefit?

Buckley: Evaluative Thinking and evaluation at large can support program improvement and optimization – making sure we’re using our resources in the best way to make the greatest impact. Additionally, foundations like the Endowment can use Evaluative Thinking to develop clearer strategies that are informed by the perspectives of key stakeholders and are measured and improved in real time. We believe strongly in the value of evaluation, but evaluation without Evaluative Thinking isn’t going to get you there in terms of making the best decisions about programs and strategy.

Q: You both talk a lot about asking the right questions. Why does that loom so large in Evaluative Thinking?

Buckley: If you’re doing evaluation without Evaluative Thinking, what does that look like? It’s mechanical. It’s a question that hasn’t been selected carefully. Take a question like this: How many people participated in your program last year? That’s a question, and you can go collect evidence and answer it. I’m not saying that’s not valuable, but you might also want to ask a more thoughtful question – not just how many people attended, but how many people engaged with the content in a meaningful way. If you ask the right question, you’re going to get the evidence that is most useful.

Q: What are the main challenges organizations face in trying to develop that culture?

Buckley: The number one thing people say is, “We don’t have time to add another thing to what we do.” Tom and I get frustrated by that, because what we’re asking for shouldn’t be an add-on to their time, although reflective discussions do sometimes take time. What we’re asking for is a culture change – one where questions are welcome and asking them is safe. Questions drive conversations.

Archibald: In our experience and reading, we’ve seen another requirement to make that culture change happen, which is that there needs to be both top-down and bottom-up buy-in and interest. So, if the leadership says, “We’re going to do Evaluative Thinking,” and the program people and the rest of the organization don’t see the value in it, it probably won’t work. And vice versa, if the people doing the frontline work are really excited but the leadership isn’t open, it won’t work then, either. You need a psychologically safe environment within the culture of the organization where people can be honest about failures, and take risks trying out new things.

Buckley: If you went to nonprofit or philanthropic program staff and asked them, “The evaluation work that you’re doing now – does it feel useful? Is it helping you make the decisions that you want to make? Is it addressing the assumptions you think are being made about your program or strategy?” Many, if not all, are going to say, “It’s not helping me. It’s extra work. It ticks a box so that I can write a report to a stakeholder.”

Evaluative Thinking, on the other hand, asks you to stop and reflect, to identify your assumptions, and to ask thoughtful questions that lead to the evidence you need to optimize a program or strategy. It all comes down to getting useful evidence you can act on to increase impact.