SOME REFLECTIONS ON AID EFFECTIVENESS
By Ajit Chaudhuri
‘If you do not know where you are going, any road will get you there.’ – Lewis Carroll
The past twenty years have seen many changes in the development sector. Where once the government was the critical agent of change, the baton has passed to NGOs, the private sector and, more recently, back to the state and to government-owned NGOs and panchayati raj institutions. Where once there was a comparatively equal relationship between the providers of resources and the implementers of interventions (or at least a semblance of a dialogue), funding agencies now exercise absolute power. Where ‘primary education’ was once seen as the critical sector, whose benefits lead and direct change in other development-related sectors, ‘governance’ and ‘livelihoods’ have usurped that role. And finally, the positive relationship between development aid and development, which was once taken for granted, now has to be proved beyond doubt. It is this, the increased importance of aid effectiveness in the development discourse, that is the subject of this paper.
Aid effectiveness is defined as the extent to which an intervention has attained, or is expected to attain, its major relevant objectives efficiently, in a sustainable fashion, and with a positive institutional development impact (as per the Glossary of Key Terms in Evaluation and Results Based Management of the Development Assistance Committee of the OECD). Put simply, it is a measure of (or judgement about) the merit or worth of an intervention, and it addresses the question of whether one’s resources and efforts have brought or are bringing about desired change (or showing results). Put complicatedly, it is about whether an intervention is relevant, efficient, sustainable, and with impact. And the development sector has seen the burgeoning of an ancillary industry in the field of monitoring and evaluation to address these questions.
The need for monitoring and evaluation is aptly described in the following statements –
• If you do not measure results, you cannot tell success from failure.
• If you cannot see success, you cannot reward it.
• If you cannot reward success, you are probably rewarding failure.
• If you cannot see success, you cannot learn from it.
• If you cannot recognize failure, you cannot correct it.
• If you can demonstrate results, you can win support.
This is all very well, and this author hesitates to dispute the need for addressing questions around aid effectiveness, or for setting up monitoring and evaluation (M&E) systems to accomplish this. This paper does, however, look to question the hugely increased, and rapidly increasing, role of M&E within development, especially in proportion to the tasks of actually managing and administering development programmes. It does so by highlighting gaps between rhetoric and reality in the field of M&E. It suggests that, beyond a point, this is a ‘socially useless activity’ driven primarily by the needs of its proponents, and it distorts development aid by directing funds towards interventions wherein results are easily quantified and quickly discerned.
The first set of difficulties relating to aid effectiveness applies to evaluators.
An M&E system requires the outlining of objectives, inputs, activities, outputs, outcomes, targets and indicators for a development intervention, and the setting up of systems to collect, collate and analyze data around these and to disseminate information to different levels in a timely manner. At issue is the assumption that objectives are known, clear and consistent; this is at variance with all experience – they are usually multiple, conflicting, vague and occasionally repugnant, mirroring the complexity and ambivalence of human social behaviour. Choosing the objectives against which to monitor and evaluate invariably has little to do with an intervention’s actual purpose and a lot to do with ease of computation. Also at issue is the difference between outputs (defined as ‘the products, capital goods and services that result from an intervention’) and outcomes (defined as ‘the short or medium term effects of an intervention’s outputs’), which serves to discern whether an intervention is merely ‘doing the work’ or actually ‘achieving results’. On the theoretical drawing board, the difference is like night and day; in the field, the chances of getting unanimity on an intervention’s outputs and outcomes are less than those of getting Afghan warlords to agree upon a peace plan.
Another matter is that of impact! By definition, impact happens in the medium to long term – often well after an intervention has ended – and it is rarely attributable to a particular intervention. The resources for an impact assessment are, however, available as a part of the intervention – either towards its end, or in its immediate aftermath, but never five (or whatever) years after the intervention when it is somewhat assessable. And the debate around attribution is circumvented by the attempt to show ‘contribution’ towards broad change, with nobody knowing quite what ‘contribution’ is.
And finally, can an evaluator actually trash a bad intervention or wholeheartedly praise an excellent one? The politics around M&E make this difficult, especially as people’s (including the evaluator’s) livelihoods are at stake. The latter leads to accusations of delivering a ‘snow job’, and therefore the need to throw in a few perfunctory negatives. The former leads to competing explanations about an intervention’s failure, and therefore whether it should be abandoned (the ‘theory of change’ underlying the intervention is weak), continued as it is (it has had insufficient time to achieve its objectives), continued with changes (its implementation is weak) or enhanced (it has insufficient resources to achieve its objectives) – and the directions set are invariably based upon political judgement rather than analytical integrity.
The second set of difficulties relating to aid effectiveness applies to the implementers of development interventions. Operations people are notoriously uninterested in data collection – their task is to make things happen, not to fill forms so that M&E personnel can later make suspect use of the information (including making M&E people look good and operations people look bad). M&E systems typically create rules and reporting requirements for implementers that divert them from their actual work and create perverse incentives towards a focus on short-term results and a stifling of innovation. A standard grouse in most implementing organizations in the field of development today is the amount of time spent, at all levels, dealing with compliance issues that have little relation to or bearing on their work.
A paper by Andrew Natsios of USAID entitled ‘The Clash of the Counter-Bureaucracy and Development’ in 2010 (available at www.cgdev.org) identifies two basic causes for this dysfunctional state of affairs. The first is the rise of a group of people he calls the counter-bureaucracy, who deal with issues of accountability and oversight and who press for increased scrutiny of aid effectiveness and for clearer demonstration of value for money. The second is what he terms ‘obsessive measurement disorder’, or the belief that the more an activity can be quantified, the better the policy choices and management of it will be. The two combine to cause ‘goal displacement’ – increasing the importance of formalistic goals over the substantive goals of an intervention and creating a situation wherein donor goals (which are largely set by the counter-bureaucracy) do not reflect the goals of the recipients of aid. Natsios says that it is a great pity that support for development interventions that can be transformative, but whose results are harder to measure and often do not emerge for years, is diminishing. He also makes some suggestions for addressing issues around the effectiveness of aid. These include –
1. Devising a new measurement system for results that acknowledges that short-term quantitative indicators are not appropriate for all interventions.
2. Adopting different M&E methods for interventions in service delivery, institution building and policy reform.
3. Researching the effect of the counter-bureaucracy on aid effectiveness with a view to reducing the compliance and reporting burden.
4. Overtly recognizing foreign policy as an objective of some interventions, and judging these against political rather than development objectives.
5. Ending the use of disbursement rates as a performance measure.
6. Devolving programming and decision making to the lowest possible level.
Most development practitioners would find connection with Natsios’ views. This author, with 20 years in development behind him, would like to add that many issues around aid effectiveness stem from the fact that a critical mass of people in positions of power within the development sector have never actually managed an intervention at a level at which they have to interface with a ‘beneficiary’ community (a result of recruitment policies that parachute bright young people into headquarters). To such people, as a wag remarked, ‘poverty is like Bihar – never been there, but don’t like it anyway’. There is a conflict between their own belief that implementation is a grubby, chaotic thing best left to the foot soldiers, and the knowledge that it is the basis for a development agency’s existence. The resultant insecurity sets aid effectiveness up as an instrument of power and control, rather than as a tool for the better delivery of development services. It is time for a re-think!