Beyond ‘ticking the box’: How can evaluation and impact assessments be done better?

Dr. Alexandra Yannias Walker, Project Manager at MzN International, provides expert advice on what current Monitoring and Evaluation practices lack and what could be done to make them better.

Why is Monitoring and Evaluation so important?

In development work, good intentions are necessary, but not sufficient. In an incredibly competitive funding environment, it is imperative that project implementers showcase, as objectively as possible, the outputs and impact of their work. This is where Monitoring and Evaluation (M&E) practices step in – providing the evidence of what has been done and achieved.

Taking a range of forms, from statistical analysis to survey-based approaches, M&E asks a variety of questions about the goals of development-oriented activities. While some M&E processes focus on measuring deliverables, asking, “did we do what we promised to do?”, other processes are concerned with determining the impact of a particular project, asking, “what happened as a result of our intervention that would not have happened otherwise?”

Without Monitoring and Evaluation, project managers cannot demonstrate their accountability, transparency and effectiveness to their beneficiaries, donors and the public. But that is not to say that M&E is all about pleasing the donors and the press; it also helps nonprofits make better-informed decisions regarding program design and implementation strategies. Engaging in such practices, and excelling at them, provides nonprofits with the tools to better understand the “black box” between program design and outcome.

In short, if done regularly and if done well, evaluation can help to make development better. The problem is: it’s not always done well.

What can we do to make evaluation better?

In my experience, evaluators, practitioners and donors alike, from those working at the World Bank to those working within small NGOs, identify similar problems with how evaluation in the aid and development sector currently works:

  • Utilizing and recycling knowledge

Completed evaluations are filed away rather than actively used. The knowledge created by evaluations is often used poorly, and frequently not used at all. This is partly because the implementing organisation may not implement a similar follow-up project, and partly due to insufficient knowledge management.

Findings should be used to analyse what can be done better in future projects, and not just by the organisation running the programme. There is a clear need for evaluative data to be stored, analysed, and actively used in project design and implementation.

  • Collecting baseline data

Unfortunately, baseline data, meaning realistic data about the situation before a project’s implementation, is simply not always collected. Without this data, it is difficult, or even impossible, for evaluators to determine whether the project has been successful, as they cannot compare the situation after the project’s completion with the starting point.

  • On-the-ground experience

Oftentimes, evaluators lack the operational experience needed to understand the realities “on the ground”. They can therefore be overly critical in their evaluations, to the extent that they cease to be constructive in helping to improve future programs. Although local knowledge is king, it is often not an important element in evaluation practices or in the reports those practices produce.

  • Doing more than “ticking the box”

Rather than investigating the reasons for particular outcomes in development projects, evaluation and monitoring practitioners are often too consumed with just “ticking the box” to show accountability. This makes it difficult to replicate results in the future – thus reducing operational sustainability.

There are, of course, many more ways in which evaluation could be improved, but having both non-profit organisations and evaluators address these issues would help to make evaluations more useful in making development work better.

So what’s our plan?

In order to truly do development and humanitarian work better, we recognise the importance of engaging more intensely in evaluation work. We will therefore concentrate on a two-pronged approach:

  a) actively collecting, sharing and communicating interesting M&E results, and
  b) making the selection, deployment and management of quality M&E consultants easier.

MzN International’s Training program is already actively engaged in providing training in evaluation methodology. Our Impact Evaluation (IME) Courses provide development practitioners with the necessary methodology and practical knowledge to meet the growing demand for evaluating the impact of development projects and programs. The course has proven very popular, highlighting the importance of the subject.

The course provides participants with a guided step-by-step approach to developing a Theory of Change, and it enables them to critically reflect on how social change happens. Fostering such training programs helps MzN International and its clients make the use of M&E more widespread, and develops evaluation expertise on the ground.

Moving Forward…

MzN International has built a reputation by designing services that are easy to use, rigorously quality controlled and informed by research that bridges the gap between theory and practice. We want to move beyond “ticking the box” by creating partnerships between academic researchers engaged in evaluation research and the development and humanitarian organisations working in the sector all over the world. We aim to match technical expertise with the context-specific knowledge that makes for more useful evaluations and leads to more sustainable projects.

Specifically, we will:

  1. Offer a quality-assessed evaluation team, in addition to our funding and management advice consultants, to our partners at no-profit pricing
  2. Ensure that all M&E work can be easily contracted and deployed, using our online project management tools to achieve 100% transparency
  3. Publish key results, offer full reports for download, and actively share these, free of charge, of course
  4. Connect our knowledge and publications with existing datasets.

This will result in a knowledge base built on professionals’ input: an easy-to-use source of reference, open to all. With the permission of our partners in the field, we will use data and feedback about project impact across different geographic areas and different thematic foci. We aim to make this knowledge base fully available to organisations through the web, so that they can use this data in planning their programmes and learn from the previous experiences of others engaging in similar work.

We at MzN International are excited about engaging in this work and would love to hear your feedback about our plans. Please share your thoughts and opinions below, and get in touch with us through our Facebook and Twitter pages!