Dr. Alexandra Yannias Walker, project manager at MzN International, provides expert advice on what current monitoring and evaluation practices lack and what can be done to make them better.

Why is monitoring and evaluation so important?

In development work, good intentions are necessary, but not sufficient. In an incredibly competitive funding environment, it is imperative that project implementers showcase, as objectively as possible, the outputs and impact of their work. This is where monitoring and evaluation (M&E) practices step in – providing the evidence of what has been done and achieved.

Taking a range of forms, from statistical analyses to survey-based approaches, M&E asks a variety of questions about the goals of development-oriented activities. While some M&E processes focus on measuring deliverables, asking “did we do what we promised to do?”, others are concerned with determining the impact of a particular project, asking “what happened as a result of our intervention that would not have happened otherwise?”

Without monitoring and evaluation, project managers cannot demonstrate their accountability, transparency and effectiveness to their beneficiaries, donors and the public. But that is not to say that M&E is all about pleasing the donors and the press; it also helps non-profits make more informed decisions regarding programme design and implementation strategies. Engaging in such practices, and excelling at them, gives non-profits the tools to better understand the “black box” between programme design and outcome.

In short, if done regularly and if done well, evaluation can help to make development better. The problem is, it’s not always done well.

What can we do to make evaluation better?

In my experience, evaluators, practitioners and donors alike, from those working at the World Bank to those working within small NGOs, all identify similar problems relating to how evaluation in the aid and development sector currently works. These include:

Utilising and recycling knowledge

Completed evaluations are filed away rather than actively used. The knowledge they generate is rarely used well, and often not used at all. This is partly because the implementing organisation may not run a similar follow-up project, and partly because of insufficient knowledge management.

Findings should be used to analyse what can be done better in future projects – and not just by the organisation running the programme. Evaluative data needs to be stored, analysed and actively used in project design and implementation.

Collecting baseline data 

Unfortunately, baseline data – realistic data on the situation before a project is implemented – is simply not always collected. Without it, it is difficult, or even impossible, for evaluators to determine whether the project has been successful, since they cannot compare the situation after project completion with the initial baseline.

On-the-ground experience

Often, evaluators lack the operational experience needed to understand the realities “on the ground”. As a result, they can be overly critical in their evaluations, to the point where they cease to be constructive in helping to improve future programmes. Although local knowledge is king, it often plays little part in evaluation practices or in the reports they produce.

Doing more than “ticking the box”

Rather than investigating the reasons for particular outcomes in development projects, monitoring and evaluation practitioners are often too consumed with just “ticking the box” to show accountability. This makes it increasingly difficult to replicate results in the future – thus reducing operational sustainability.

There are, of course, many more ways in which evaluation could be improved, and having both non-profit organisations and evaluators address these issues would make evaluations more useful in making development work better.

So, what’s our plan?

In order to truly do development and humanitarian work better, we recognise the importance of engaging more intensely in evaluation work. We will, therefore, concentrate on a two-pronged approach:

  1. Actively collect, share and communicate interesting M&E results.
  2. Make the selection, deployment and management of quality M&E consultants easier.

MzN International’s training programme already provides training in evaluation methodology. Our Impact Evaluation (IME) courses give development practitioners the methodology and practical knowledge needed to meet the growing demand for evaluating the impact of development projects and programmes. These courses have proven very popular, highlighting the importance of the subject.

The course provides participants with a guided, step-by-step approach to developing a Theory of Change, enabling them to critically reflect on how social change happens. Fostering such training programmes helps MzN International and its clients make the use of M&E more widespread and develop evaluation expertise on the ground.

Moving forward…

MzN International has built a reputation by designing services that are easy to use, have rigorous quality control and are informed by research that bridges the gap between theory and practice. We want to move beyond “ticking the box” by creating partnerships between academic researchers engaged in evaluation research and the development and humanitarian organisations working in the sector all over the world. We aim to match technical expertise with the context-specific knowledge that makes for more useful evaluations and leads to more sustainable projects.

Specifically, we will:

  1. Offer a quality-assessed evaluation team, in addition to our funding and management advice consultants, to our partners at a not-for-profit price.
  2. Ensure that all M&E work can be easily contracted and deployed, using our online project management tools to achieve 100 per cent transparency.
  3. Publish key results, offer full reports for download and actively share them – free of charge, of course.
  4. Connect our knowledge and publications with existing datasets.

The result will be a knowledge base built on professionals’ input – a source of reference that is easy to use and open to all. With the permission of our partners in the field, we will draw on data and feedback about project impact across different geographic areas and thematic focuses. We aim to make this knowledge base fully available to organisations through the web, so that they can use this data when planning their programmes and learn from the previous experiences of others engaged in similar work.

We at MzN International are excited to engage in this work and would love to hear your feedback about our plans. Please share your thoughts and opinions below, and get in touch with us through our Facebook and Twitter pages!