7 Preliminary Lessons from the 2nd Climate-Eval Conference

The 2nd International Conference on Evaluating Climate Change and Development has ended in Washington, D.C., leaving many evaluation questions answered and perhaps many more unanswered, even though it would be fair to argue that the conference did not set out to answer every question associated with the evaluation of climate change and development.

Lesson 1: We Know How to Evaluate Mitigation

There may still be ambiguity and an absence of standards and norms, but for the most part, we know how to evaluate climate change mitigation. "We know how to calculate greenhouse gas (GHG) emissions," says Dorter Verner of the Evaluation Office of the Inter-American Development Bank. However, the challenges are far from over, particularly in relation to mitigation monitoring, reporting and verification (MRV).

Lesson 2: We Still Don't Have a Consensus Definition of Adaptation, and Do We Need One?

Back in 2008, at the 1st climate evaluation conference in Alexandria, Egypt, much of the discussion focused on the definition of adaptation. Sociologists, economists, anthropologists and people in different communities around the world have different understandings of adaptation. "What is adaptation, as opposed to the traditional community development carried out in communities for decades?" pondered Uma Lele, a veteran development expert and critical observer at the 2nd Climate-Eval conference. While some progress has been made since the 1st conference in Alexandria, there is plenty of work still to be done in this area.

Lesson 3: Progress Made but Much Work Still Needed to Evaluate Adaptation

"We are not there yet," says Jyotsna Puri, Head of Evaluation at 3ie. "There are still many uncertainties," says Joyce Coffee, Managing Director of the Notre Dame Global Adaptation Index.

Lesson 4: Data Availability Remains a Challenge

There is no evaluation without data. In traditional development areas, finding credible data is already a challenge. In climate change, and adaptation more specifically, locating and using data remains a formidable challenge. The conference heard about many challenges concerning data availability and how to tackle them. What emerged is that a data revolution is needed: developing data systems that are useful at all levels. "We need a data wiki," says Joyce Coffee of the Notre Dame Global Adaptation Index.

Lesson 5: Is Establishing the Baseline a Mirage?

"Establishing the baseline is a mirage," argues Irene Karani of LTS Africa. "Baseline research is quite expensive and demanding from the point of view of human and financial resources." So without baselines, how do we measure effectiveness?

Lesson 6: How Much Rigor Is Needed to Be Acceptable?

It emerged from the conference that we may have to choose between achieving methodological rigor and producing evaluations that help decision makers design more responsive policies and interventions and solve the poverty challenges facing this generation. The point is that, with all the challenges associated with finding the right data, establishing baselines, and so on, what is the acceptable level of rigor for evaluating climate change?

Lesson 7: Global Search for Best Practices Far from Over

Hearing many practitioners speak, there is a sense that considerable learning was achieved. This is positive from the organizers' perspective; after all, it was one of the objectives of the conference. But we do know that many other questions remain unanswered, and the debate and the overall quest for global best practices are far from over.


Comment from Dbours:
Lesson 1: Although we do "know how to calculate greenhouse gas (GHG) emissions", mitigation MRV (monitoring, reporting and verification) systems still need to mature further when it comes to measuring good governance in mitigation and wider socio-economic benefits.

Lesson 2: The fact that we don't have a unified definition of adaptation, or of what successful adaptation looks like, does not mean that we have been at a standstill since the Alexandria conference. Concepts and categorizations such as 'absorptive capacity, adaptive capacity and transformative capacity', as well as 'adaptation action and sustained development, or adaptation mainstreaming', have developed over time, and we now have a clearer picture of the differences between these categorizations and examples of good M&E implementation in each of them.

Lesson 3: There will always be uncertainties in the M&E of adaptation. There is no adaptation future without uncertainties, but the ND-GAIN Index, which Ms. Coffee directs, is a clear example that it is possible to come up with something tangible and useful amid all those uncertainties. And as much as we want to reduce uncertainties, we also have to look at how to embrace uncertainty and manage both the uncertainty and the accompanying risks. The risks of doing nothing are much greater than the risks of the uncertainties.

Lesson 5: Clear goals and targets help. I am thinking that Ms. Karani is referring to the use of RCTs, i.e. comparing results against a control group from a baseline value. Even without RCTs, we can still conduct an initial assessment and use it as a baseline. The use of more robust M&E practices should be weighed against the project's size.
