Impossible to measure the effect of Norwegian aid
After concluding that none of the evaluations and studies commissioned by its Evaluation Department could report sufficiently on outcomes and impact, Norad commissioned CMI and ITAD to evaluate its current policies. The results of the report are presented today.
The report “Can we demonstrate the difference that Norwegian aid makes? Evaluation of results measurement and how this can be improved”, supports Norad’s own statement that they have not been able to report sufficiently on outcome and impact. It concludes that Norway needs a complete reform of the aid administration, and that there is a need to move from a minimal focus on results to a comprehensive and logical system in line with best international practice.
- This implies changing the responsibilities and incentive structure of results documentation, developing proper processes for planning and monitoring grants, providing in-depth guidance for the staff involved, establishing coherent procedures and strengthening the semi-autonomous evaluation functions, says Espen Villanger, senior researcher at CMI.
- To ensure that development aid brings about positive changes, it is necessary to conduct evaluations that document results through solid empirical and theoretical methods. Aid projects have to be designed in a way that makes such assessments possible, he continues.
Baffling maze of rules and procedures
Evaluating Norad’s evaluation practices, the researchers find that the difficulties in measuring outcome and impact have several causes. In their report, they provide a list of main findings and a comprehensive set of recommendations to match. Firstly, they find that current policies, systems and procedures are too fragmented and do not provide effective guidelines. There is also little clarity about minimum standards. There are 45 different grant schemes, each with its own set of rules for results measurement, causing considerable confusion among staff evaluating aid projects. In effect, which guidelines and standards are followed varies from project to project, and there is very little consistency.
To create more consistency and clarity in guidelines and procedures, the researchers recommend requiring all project partners to use standard templates and to outline in greater detail how they plan to measure results, developing more comprehensive guidance and checklists, and taking a more strategic approach to the use of evaluations and grants.
Two possible ways forward
Evaluation is not only about guidelines, procedures and regulations. Performance also depends heavily on organisational culture, which needs to support the implementation of the organisation’s policies and systems. The report concludes that staff should receive more support on several levels.
- The staff argue that they possess the necessary skills to review applications and monitor grant performance. However, they express concerns that time pressure and low prioritisation by senior management reduce their effectiveness. Training also reaches too few staff members, with gaps in coverage and little supporting material, says Villanger.
- The pressure to reform the results measurement system must come from Norad’s top management, and the political leadership must also be clear in its requirements. It will be a demanding process, but with hard work and strong will, Norway can reach the level of those we like to compare ourselves with, says Villanger.
The CMI-ITAD team outlines two possible and contrasting ways forward: to concentrate expertise or to broaden it.
Stronger focus on empirical methods
- There is enormous potential for learning from the failures of past evaluations and from the design of aid interventions, says Villanger.
The field of evaluations is evolving quickly. Future evaluations will focus more on documenting results through empirical methods, according to Villanger.
- This necessitates stronger project planning. Projects have to be designed in a way that facilitates thorough documentation and checks, he says.
Institutions and organisations involved in evaluations need to move fast to keep pace. Last year, CMI established a core evaluation team.
- The new team strengthens our evaluation competence and enables us to apply the best possible evaluation methodologies and tools by combining multidisciplinary international research with best evaluation practice, says Villanger.
See articles in Norwegian newspapers:
Stavanger Aftenblad: Resultatmålingen av norsk bistand slaktes (“Results measurement of Norwegian aid torn apart”)