This blog is written by Nathalie Beaulieu, Aliou Diouf and Guy Jobbins
In 2016, the Collaborative Adaptation Research Initiative in Africa and Asia (CARIAA) program invited us to prepare a working paper sharing lessons learned while strengthening the evaluation capacity of researchers involved in the Climate Change Adaptation in Africa (CCAA) program. This experience, and our subsequent reflection on it, convinced us that developing evaluation capacity is key to developing adaptive capacity.
The three of us were program officers in the program, which supported research and capacity development. Among other responsibilities, we oversaw the application of monitoring and evaluation (M&E) approaches by research teams as part of a participatory action research approach. We also developed a number of activities to strengthen the M&E capacities of research teams and their partners, including training workshops, mentoring in the field by experts, interactions between researchers and program officers, and learning forums. The training workshops initially focused on the outcome mapping approach and gradually came to include a number of other tools and methodologies.
The researchers taking part in the CCAA program came from diverse scientific fields and types of organisations. Their evaluation capacities varied greatly. Some had considerable experience in results-based management. One of the motivations for the program to invest in developing evaluation capacity was to ensure that the researchers were collecting the data necessary to demonstrate the effect that the projects were having on the adaptive capacity of the communities involved.
As the program unfolded, the challenges of evaluating changes in adaptive capacity became more and more obvious. At the same time, we came to appreciate the important role that participatory evaluation played in catalyzing the adaptation process and in building legitimacy and trust among participants. This is why the working paper places more emphasis on the latter, while also discussing ways to overcome the challenges of evaluating the success of adaptation.
Most CCAA projects facilitated forums of some kind, bringing together diverse stakeholders to discuss development goals and visions, climate-related risks, the roles and responsibilities of players, and project progress and results. Some of these forums took place at the local level around field learning sites, others at the municipal, departmental and/or national scales. Outcome mapping allowed teams to translate the changes in behaviour or relationships that adaptation was expected (or hoped) to produce into “progress markers”. Teams then documented these changes, as well as unexpected ones. In some cases, other participatory action research methodologies made it possible to determine the immediate effects of tested adaptation options, for example on agricultural yields. Project teams then used the observations from outcome mapping and participatory action research to feed into project reports that followed a results-based management logic, i.e. describing activities, outputs, outcomes, research findings and impact.
The CCAA final evaluation found that evaluation capacity was among the key capacities the program strengthened for the researchers involved. We found that using outcome mapping and participatory action research to feed into results-based management allowed us to address both accountability and learning imperatives. Demonstrating increased adaptive capacity can benefit from a description of changes in behaviours, policies or relationships.

Integrating participatory evaluation methodologies into research, planning and knowledge-sharing processes can reduce costs and simplify logistics; it also allows evaluation capacity to be shared with other actors involved in adaptation. A flexible approach, which allows unexpected results to be documented and new markers or indicators to be defined along the way, can help overcome the challenges posed by an evolving climate and changing environmental conditions. Monitoring and evaluation contributes most to legitimacy and trust when the information collected is useful to participants, is adequately shared, and is used for decision-making and action planning. Some organisational, political and cultural challenges can also be overcome by recognising achievements and by treating failures or mistakes as opportunities to learn. Finally, it is important not only to strengthen the evaluation skills of individuals, but also to encourage an evaluative culture within organisations.
The growing number of local, national and international adaptation initiatives provides opportunities to apply these principles and to document improvements in adaptive capacity.
A copy of the full paper is available here.
The CCAA program ran from 2006 to 2012 and funded 41 research projects in 33 African countries. CCAA and CARIAA were joint initiatives between Canada’s International Development Research Centre (IDRC) and the United Kingdom’s Department for International Development (DFID).