Defining Evaluation’s Role: A Conversation with Bill Bacon

Bill Bacon joined The Duke Endowment in 2009 as Director of Evaluation, after five years as a grantmaker and evaluator at the David and Lucile Packard Foundation. In the interview below, he talks about the need for evaluation and describes the role it can play in philanthropy.

In philanthropy, is there a standard definition for “evaluation”?

A common one is “the use of social science research methods to systematically investigate the effectiveness of social intervention programs.” That definition emphasizes the role of evaluation in determining “what works” – a program’s effectiveness or impact. But the term is also used to describe situations where you are collecting information to learn about how a program is being implemented and to help implementers improve their programs.

What’s behind the debate about whether evaluation should be for “proving” versus “improving” programs?

This debate is mostly about methods. Some folks in the proving camp argue that to get really solid evidence that proves a program’s effectiveness, you should use the “gold standard” of research methods – the randomized controlled trial. Others say the real purpose of evaluation is to improve programs, and for that other methods might be more suitable and less costly.

Which side do you take?

I don’t take a side because I think it’s a silly argument. Clearly, evaluation can be useful for answering both types of questions. But different questions call for different sorts of evidence and there is no gold standard method that is superior in all cases. Method choice has to be matched to the evaluation purpose.

Where is The Duke Endowment when it comes to evaluation?

Each program area at the Endowment has evolved toward greater use of strategy by directing more resources toward special programs or initiatives. Child Care, Health Care, Higher Education and Rural Church have “areas of work,” which we see as a body of grantmaking that has a definable goal and a coherent set of approaches or strategies. The Endowment currently has 11 defined areas of work. 

Tell us about the “strategy toolkit.”

In 2012, staff presented a grantmaking framework to Trustees consisting of three tools: a grantmaking plan, a logic model, and a dashboard. The grantmaking plan is a narrative description of a grantmaking strategy. The logic model is a visual description of that strategy. The dashboard summarizes progress against the key outcomes envisioned in the strategy.

Where did you go from there?

Staff began working within their program teams to draft the three products for each of their areas of work. We started with logic models, since they depict the core of the grantmaking strategy.

And then?

We worked on the grantmaking plans, since those narratives lay out the research basis for why we chose these particular strategies.

The dashboards came last.

Describe them for us.

They are a tool to track progress against strategy. Outcomes from the corresponding logic model are carried over to the dashboard, along with specific “metrics” that have been identified as useful markers of progress. We conceived them as a single page, with a mixture of text and graphics.

What are the major components?

A summary of the overall goals of the grantmaking strategy and approach; spending in relation to other grantmaking in the program area; desired outcomes and metrics, along with time-bound targets and current progress; and other contextual information.

Will they be updated? 

Yes. Twice a year.

Who will use them?

We want them to be useful for Trustees, program staff and grantees. I think their clearest use is as a management tool to remind us of the big picture and what we’re trying to accomplish.

How have you explained this process to grantees?

We had meetings with grantees where we would say, ‘This is what we’re thinking. What are we missing? Is this the way you think of things? How would you measure this?’ We incorporated that feedback.

We’ve been clear that we’re struggling with this ourselves – that we want to hold ourselves accountable and that this is an effort to do that. It’s important to enter into this in a spirit of learning and with respect for the expertise they have.

What was a big challenge?

With some projects, it was difficult to define in black-and-white terms which outcomes we expected over the next five years and how we could measure them. In some cases, those conversations revealed that different program officers really had different priorities.

Getting the program areas on the same page can serve a larger purpose of making sure that everybody understands why we are doing the project and why we think it is going to add up to something really meaningful. 

Any advice for other foundations?

There’s no substitute for struggling through it.

Contact Us

William F. Bacon
Director
704.969.2136