Empowering Learning for Just and Peaceful Societies


International development is slowly waking up to the realisation that existing programming frameworks, often driven by the need for accountability, do not hold up in the face of the true complexity of local dynamics and fail to deliver for beneficiaries, implementing organisations and donors alike. The truth is: we often do not know the solution to an existing problem and, as such, it is difficult to predict if and how a project will work out. Enter adaptive programming, where we design, test and evaluate different programming strategies to find out what works (best).

As an M&E consultant, I am given intimate access to my clients’ MEL systems – or, more often than not, the lack thereof. Working with a large variety of organisations, from civil society to Parliament, I have noticed some well-intended but potentially damaging practices when it comes to developing an organisational monitoring system.

Working with Adapt Peacebuilding to introduce robust monitoring and evaluation into its existing programming approach – systemic action research led by local communities in Kachin State – I am reminded of these lessons. I am sharing them today in the hope that others might find them helpful, or might offer additional lessons that complement my thinking.

1 – People at the centre

Interested in learning more about M&E? Check out this video presentation by Stephen Gray to Fundación Ideas para la Paz, which focuses on systems approaches to monitoring and evaluation.

M&E guidance documents are often packed with technical language – concepts such as theory of change, logframes, baselines… they do little to get staff excited about monitoring and evaluation. More often than not, subconsciously or overtly, M&E experts thus shut themselves away in their M&E ivory towers, incomprehensible and inaccessible to the average project stakeholder. And they are not doing themselves a favour by it – the result is an increased workload, as the development of MEL frameworks, data analysis and reporting is often left to them as “the experts”, and logframes that never get updated, with indicators that are rarely monitored.

The original idea behind monitoring and evaluation is that it provides programme managers with the tools they need to understand how their programme is progressing, whether it is achieving its intended results, and whether the programme approach needs to be adapted in any way. As such, outsourcing MEL work to M&E staff or even a consultant is counterproductive (the role of M&E staff and consultants is to advise and support the programme teams, but they cannot do MEL on their behalf).

As such, it is our responsibility as M&E consultants to identify key MEL stakeholders from the outset (ideally everyone from donors to organisational management to programme staff to programme beneficiaries) and take them with us on the MEL journey. This often means “fixing” their relationship with M&E and getting them excited about the information they are about to collect – a whole new world of evidence-based decision making, if you will.

2 – A good M&E system is one that is being used

And how do you ensure that a system is used? By making it useful.

The issue with most M&E systems is that they are developed at proposal stage, often by an individual not involved in the later programme implementation, based on donor priorities, with “easy to measure” indicators taking precedence over what the programme manager would actually need to explore.

When it was developed, the logframe, or logical framework approach, was at its core a simplified version of a programme’s theory of change, with objectively verifiable indicators developed to test its assumptions. However, we now know that 1. it is rarely used correctly – as a meaningful inquiry into a programme’s theory of change – and 2. the rigidity of the framework does not encourage adaptation of the programme’s theory of change, even when indicator evidence suggests that the assumptions on which it rests do not hold.

Rather, M&E consultants should focus on facilitating a conversation to tease out how programme managers currently take management decisions and what (additional) information they would need to make better-informed ones. Stakeholders in this might go beyond programme management and include operational staff as well as organisational leadership. Donors have an interest in teasing out learning about the programme approach and what works, so they would certainly welcome being able to feed into this conversation.

Programming in complex environments means that we often have difficulty predicting what kind of changes we will be able to observe – so milestones and targets are less meaningful for adaptive programmes. If the donor still requires some of the traditional accountability instruments, one way to work with these might be to agree on some basic process indicators (# of beneficiaries reached, # of trainings held) and complement them with “open-ended change” indicators – where you commit to reporting a number of stories of change to the donor without yet having to specify what those changes might be. In the end, the donor is just as interested in learning how to build better, more impactful development interventions as we are, so they will appreciate an open conversation on this.

3 – Simplicity is key

In line with considering utility and utilisation, putting the users of MEL tools at the centre of their development also ensures that the tools will be both understood and employed.

All too often, MEL for adaptive management is thought to require complex and innovative approaches, with current trends favouring outcome mapping, outcome harvesting and most significant change, or a combination thereof. None of these tools is completely new, though: they are often taken from other disciplines and build on existing methodologies. There is no need for complexity-aware monitoring to be unnecessarily complex – in fact, since monitoring information has to be available more frequently due to the shorter feedback loops in adaptive programmes, we should actively avoid overly advanced approaches.

In my experience, programme staff themselves often have the best ideas when it comes to developing tools to gather relevant data on the learning questions – my responsibility as MEL expert is to ensure that the tools we develop align with basic ethical and research standards. Frequent checks on the quality of the data gathered then often become an additional opportunity to build capacity in identifying outcomes and to tailor monitoring tools further, making them even more fit for purpose.

4 – All together now

As mentioned above, the MEL data collected is best analysed by those who have been involved in the programme and are familiar with the context in which it operates. This can be achieved through participatory exercises, such as mapping and storytelling, which bring together programme staff and beneficiaries to develop a common understanding of what the data tells us and what that means for the programme, the community and beyond. The role of the MEL expert here is simply to facilitate and guide the process, not to interfere.

An additional benefit of bringing in programme beneficiaries to validate and interpret the information collected is that it provides a degree of direct accountability, which strengthens community ownership of programme outcomes.

5 – Spaces for learning

Last but not least, all the monitoring data in the world is of no use if it is not being analysed in its context and considered by those who make decisions about the programme.

For this, there is no need to add additional layers of meetings – most programmes already have plenty of so-called learning spaces that could be used more effectively for this purpose. Simply ask yourself: where do we currently take decisions about the programme, who is (or should be) involved, and what information do we need to come to a decision? A learning space can be anything from a weekly staff meeting, a field visit or a staff retreat to a formal evaluative exercise. Recording any decisions taken as a result of reflection is key for institutional memory and future reporting.


In fact, the absence of pre-defined outcomes and targets does not mean that adaptive programmes have less reporting to do. There is often a need for more in-depth recording of the process of coming to a decision about adaptation, both to be able to explain it to the donor later on and to draw out lessons for the wider development community on what triggers adaptation. More on this in my next blog.

 

By: Frederike Engeland, MEL Advisor based in Yangon working with a variety of organisations from civil society to Parliament. Fred is a peace and conflict studies graduate and former International Alert staff member.
