Communicating Evaluation to Non-Evaluators

This blog was originally posted on BetterEvaluation by Simon Hearn and the original post can be found here. We feel that the post and linked materials are very relevant to those working in environmental and climate-related evaluation, and so it is being re-blogged by Simon Hearn here on Climate-Eval.

 

 

The Overseas Development Institute (ODI) recently published a "10 Things to Know About Evaluation" infographic in support of the International Year of Evaluation. I was part of the team that drafted it, and over 9 months, 8 meetings and 16 revisions I discovered just how difficult it can be to communicate a complicated set of ideas to a non-expert audience.

Infographic: Ten Things to Know About Evaluation

 

The Challenge of Speaking to Different Audiences Parallels the Challenges of Communicating Evaluation Findings

The publication is aimed at people engaged with or interested in international development, but who don’t really know what evaluation is – or how it can be used. Our thinking in this field has been shaped through years of conducting evaluations, our involvement in BetterEvaluation, our work with the Impact Evaluation Methods Lab and our long-standing hosting of the Outcome Mapping Learning Community. We wanted to bring together what we’ve learned, dispel some of the myths about evaluation and break down barriers between the programme implementers and the evaluators.

The tricky thing was, we knew evaluators would be among the first to jump on it – either because they wanted to pick it apart or (hopefully) because they saw it as useful for communicating evaluation to clients and colleagues. So it had to be technically sound but jargon-free, which meant abandoning most technical definitions. While the communications specialists were protecting the accessibility of the messages, the evaluation specialists were protecting the nuance and accuracy.

In many ways, this paralleled the challenges of communicating evaluation findings: how do you capture variation and nuance, while presenting a concise set of clear messages?

 

Striking the Right Tone Was Important to Delivering Our Message

We were aiming for a positive tone, which was actually quite hard since most of the points came from negative experiences of bad or mediocre evaluations. If we had been writing the ‘10 Don’ts of Evaluation’, we probably would have finished months earlier, with points like:

  • Don’t claim success based on unsystematic analysis of biased data.
  • Don’t use ‘cookie-cutter’ approaches without thinking about purpose, questions or context.
  • Don’t start with an answer and then search for data to support it.
  • Don’t treat evaluation as a one-off study undertaken only at the end of a project.

These experiences divided us. On the one hand, we wanted to present evaluation as a professional field: it has methods, skills, competencies and technical standards that professional associations around the world uphold rigorously. On the other hand, we wanted to communicate it as something accessible to everyone, especially those managing and implementing programmes. In the end, we had to think back to our target audience: these fallacies wouldn’t mean much to someone who is new to the field.

 

Talking About Success and Failure Is Risky Business

One of the important messages was that success and failure are not black and white. Rarely will an intervention be 100% 'successful'. Some aspects might have worked at that time, in that place and for that particular group, but others might not. Or not yet, or perhaps not with the measures used. The job of an evaluation is to bring all this evidence together and apply transparent criteria (developed through a transparent process) to make an overall judgement. It’s the understanding of what worked and what didn’t work, where, when and for whom that leads to learning and improvement.

It’s important to recognise the risks of using such value-laden words as ‘success’ and ‘failure’ in a world where, so often, ‘failure is not an option’. Funders and senior staff often expect, explicitly or implicitly, that projects demonstrate high-impact, cost-effective results. Talking about success and failure can exacerbate that fear, driving project implementers away from good-quality, technically competent evaluation and towards informal, unsystematic forms. The latter may give them greater control but ultimately provide a shaky foundation for decision-making.

 

Evaluation Practice Can Be Highly Political

To keep our messages simple, however, we had to gloss over many of these contentious issues. In the end, we feel the final publication retains just enough of what we hoped to say and communicates it in a way that gives it a good chance of being read by new audiences.

 

Now that we’ve put our list out there, we’d like to hear what colleagues would put on theirs. What would you say about evaluation to your non-evaluator friends and colleagues, and how would you say it?

 

BetterEvaluation is the original poster and owner of the content in this blog post.

 

 

Comments

Comment from Dbours:

Love it. I missed the infographic in my ODI news feed, but I'm happy to catch it here. I immediately saved it; it is powerful in all its simplicity.

 

You know, to me there is bias and there is subjectivity. I have an issue with bias, but I am open to subjectivity as long as I know what is purely objective and what is more subjective. People who are really close to a project often have a higher level of subjectivity towards its outcomes, but at the same time their information and, at times, anecdotal knowledge might provide unique insights into the functioning of the project and its results.

 

A few days ago at a reception (thank you Matt Keene!) I talked to an evaluator's daughter, about 10 years old. Her mother said: "Dennis is an evaluator, just like mommy." And I asked: "Do you know what an evaluator is?" She shook her head: no. I said: "Your teacher sometimes gives you exercises, and in the end he will take a big red pencil and grade your work." Yes, she recognised that. He is then evaluating your work; he gives you a grade that tells you how well you did. "How do you feel about that, when he gets out his big red pencil?" She wasn't too happy about that. I said: "That is exactly the problem we have when we do our work. People think we come with a big red pencil to grade their work. And it scares them."

So I asked her: "How do you learn? How do you know what you do well, and what you need to improve?" Exactly: it is your teacher telling you that, and he can do so because he has evaluated your work. What we try to do is not scare people with big red pencils. We try to make it a discussion, so that we learn about their project, about their work and how well they did, and the people in the project learn what they did well and how they can improve further. Just like your teacher helps you learn.


Thanks for sharing your story, Dennis. It reminds me of an exercise I've seen used to get researchers to communicate their research more clearly: the grandparent test. The idea is that if you can't explain your research to your grandparent or an elderly relative, you're unlikely to be able to convince a policy maker.

I like your addition about bias and subjectivity as well. I was at a meeting yesterday with an evaluation commissioner who explained that for their purpose they didn't need complicated statistical 'objective' verification; simple triangulation by asking a few informed people would be sufficient. It usually comes down to purpose, users and uses.


I found this article very useful, especially for non-evaluators, because it gives them an overview of why evaluation is needed and how it is done. As a student at different levels of study (BSc, MSc, PhD), I have been under evaluation for my thesis projects and coursework for a long time. I have experienced different levels of stress under evaluation, because not all evaluators think the same and the objective above is sometimes overlooked. It all depends upon the personality of the evaluators. Some evaluators stick to the guidelines and evaluate against the objectives. Others say they do the same but at times judge negatively, even if your work is fine and up to standard. Just as biases are common in the work under evaluation, so they are among evaluators. How do you address the biases of evaluators? From my experience, the following points influence evaluators:
1) Race or ethnicity - some evaluators believe that some races are better at the work than others, and they reflect this in their evaluations.
2) The school or university attended - which university the person being evaluated attended affects the evaluators' judgement.
3) Discipline - some prefer the basic sciences, others the applied sciences, because an evaluator may think people from the applied sciences are good at the job while those from the basic sciences are not. This is personal bias; it does not exist in reality.
4) Looks are also deceptive - evaluators see pictures and become biased. They think beautiful, handsome, fat, ugly and so on, and that influences their evaluation.
I may not be a technical person in this regard, but I think biases are everywhere. This is a biased world. But there are also a few people of good character, who try to be balanced, modest and less biased.

While the above article is right in the ideal, there are human behaviours that do affect evaluation, oftentimes negatively.
