Biases in impact evaluation

Andrew Zubiri
Climate Investment Funds (CIF)
Content Moderator

You've finished writing your evaluation report, complete with a neat LogFrame and objectively verifiable indicators. Rating: highly satisfactory. That's good, but does it hold for the development intervention as a whole?

In a guest post at the World Bank's Development Impact blog, Martin Ravallion criticizes development evaluation that assesses projects in isolation from the broader development portfolio. He cites common pitfalls in conducting evaluations, such as the assumption of negligible interaction effects among project components and a tendency toward selection bias in what gets evaluated. But these biases are sometimes inherent in the programs and policies under consideration: it is hard to draw a representative sample of roads, dams, and other big-ticket projects, or of multifaceted policy reforms.

If we are serious about assessing "development impact" then we will have to be more interventionist about what gets evaluated and more pragmatic and eclectic in how that is done.

Ravallion recommends central coordination of what gets evaluated and creative approaches to evaluating portfolios as a whole that take interaction effects into account.

Does your evaluation workflow for climate mitigation and adaptation projects run into these biases? How do you deal with them, if at all?

Source: Development Impact via Smart Aid