what is MEL?

MEL is short for monitoring, evaluation, and learning. The Studio defines MEL as:


“the natural process of watching how things are going, asking if they're working and why, and then changing our behavior to continuously improve.”


Humans, many wild animals, and plants engage in MEL-like processes to survive. Thus, we stress that MEL is natural. In humans, it shows up when we decide what policies to pass, what programs to fund, and even who to date.

On that note, to keep things interesting, we'll explain MEL through a romantic relationship, alongside examples from a hypothetical program that aims to improve academic outcomes. Let's get started!

monitoring

Monitoring is an ongoing process we use to make sure things are progressing well...and to spot when they are not. Catching trends early makes it possible to keep moving in the right direction and to course-correct before things get out of hand.


In a relationship, monitoring would be keeping an eye on your partner's behavior. Are they responsive to texts? Do they appear to be in a good mood? These are the day-to-day observations we collect.


To monitor an education program, for example, we might regularly collect data such as:

  • How many books have been distributed.
  • How many teachers have been hired.


evaluation

Unlike monitoring, evaluation happens only at key times: usually before a program starts, around the time it ends, and at set intervals (every 1, 5, 10, or 15 years) in between. We ask specific questions about what has changed, why, and what should be done next.


Think of evaluation as sitting down with your partner once a year to formally examine whether there have been positive or negative changes in your relationship, whether you are more or less happy than you were this time last year, and how you can grow stronger together.


On an education program, an evaluation might ask:

  • Has this program contributed to improved student test scores?
  • Have there been any unintended consequences of this program (positive or negative)?

learning

As you may have noticed from Cardi B and Offset's old relationship, learning is the part people sometimes struggle with.


In fact, prior to 2010, most people just said "M&E." The "L" is a relatively new addition. It is important to stress that "learning" is not just about gaining new information. It is also about taking action.

In a relationship, learning would be the combined act of discovering your partner likes flowers, then surprising them with a bouquet once a month to keep the spark alive.


For an education program, learning could be finding out that 40% of students spend less than one hour a week reading at home, then deciding to create a "Book Blitz" that encourages children to read more.

dealing with the unavoidable

Even though we avoid technical language as much as possible, it is very important that you are familiar with common MEL terms. We've chosen a few that are hard to escape. You will very likely see these words in program documentation, and they will come up in our design sessions.

  • Intervention / Project / Program: a carefully planned effort (or set of efforts) designed to achieve a goal

  • Results chain: the logical progression from inputs to impact:
      • Inputs: the resources put into a project (e.g. money or staff time)
      • Activities: the actions taken as part of the project (e.g. building schools or training teachers)
      • Outputs: the immediate product of an activity (e.g. number of schools built or teachers trained)
      • Outcomes: the short-term result of an activity (e.g. increased student enrollment or improved understanding of best teaching practices)
      • Impact: the long-term, sustained result of one or more projects (e.g. increased income due to higher academic attainment)


  • Theory of Change / Program logic: an articulation of how a project is expected to create change

  • Assumptions: the conditions that must hold true for the program logic to work (e.g. assuming that teachers will retain training material instead of forgetting everything as soon as the session is over)


  • Indicators: signs that something is happening, such as evidence that desired outputs/outcomes are being realized or that assumptions are being met

  • Reach: the people, animals, or things (e.g. schools) that have been directly or indirectly affected by a program


  • Informed consent: the ongoing process of making sure someone has vital information and actively agrees to participate in something (a program, evaluation, experiment, etc.)


  • Do No Harm: the principle that evaluation (or other activities) should not increase risk or harm for participants or communities. When there is tension between learning and safety, safety wins.

Alright, those were the broad terms that relate to programs as a whole (including MEL work). The remaining terms are slightly more technical and specifically relate to evaluation and research:

  • Evaluation purpose: the reason we’re evaluating, including what decisions it will inform

  • Primary users: the people who will use the evaluation findings (e.g. mothers, local leaders, organizations, or government officials)

  • Evaluation questions: the specific questions the evaluation will answer (e.g. has this program contributed to a measurable increase in the percent of children passing state exams?)

  • Methodology / methods: the techniques used to collect, analyze, and interpret data and answer evaluation questions (there are countless methods; we'll support you in choosing the ones appropriate for this evaluation)

  • Triangulation: using multiple methods to validate a result (a best practice in evaluation)


  • Quantitative data: numerical data, or things that can be counted or measured (e.g., height, weight)


  • Qualitative data: non-numerical data, like words or actions (e.g., Google reviews)

  • Limitations: what the data can’t confidently tell us (these are shortcomings that must be weighed when choosing methods)

  • Bias: a systematic error that skews data, leading to inaccurate conclusions (note: this is not the same as prejudice; there are numerous biases in research, and we will support you in avoiding and managing them)

  • Baseline, midterm, and endline: evaluations that coincide with the (approximate) beginning, middle, and end/renewal of a program, respectively

That's it! If you develop a good understanding of these terms, you'll be in a very good position. Don't worry if you have to read them over a few times. Novice evaluators and program staff conflate these terms regularly. Rest assured, if you have questions, we're here to support you.

setting the record straight

Now that you know what MEL is and are starting to develop a good understanding of terms, the last thing to do is address some damaging misconceptions:

Misconception: Evaluations are audits

Truth: Auditors ask whether people are following the rules. Evaluators ask what effect someone's work has had, and what can be done to improve. We are not the same.

Misconception: Evaluation is a form of verification

Truth: Evaluations don't set out to prove things, including success. They are designed to get an honest picture of what happened. Sometimes evaluations confirm that things are working as we suspected; however, they may also show where improvements are needed. Both outcomes are useful.

Misconception: Evaluations need to show causation to be worthwhile

Truth: Assessing causation allows you to say something like, "this program caused in-school suspensions to decrease by 5%."


The reality is that evaluations that unlock these types of claims are very expensive and not suited to every situation.


Programs (and people) rarely exist in a vacuum, so most outcomes are the result of numerous factors (other programs, changes in the environment, etc.). It is therefore perfectly legitimate, and often more intelligent, to assess contribution: whether and how a program contributed to change, not whether it is solely responsible for it.

Misconception: Numbers are more credible than stories

Truth: Numbers tell us what is happening and stories tell us why. We need both to make improvements.


It is best practice to collect both quantitative and qualitative data, and to triangulate findings using additional sources, such as publicly available data.


On this point, more data does not mean more credibility. It is best to collect only what is necessary to answer the evaluation questions.


We address these misconceptions (and call them damaging) because they have severely altered some people's relationships with evaluation and evaluators. They create hostility, distrust, and unrealistic expectations. Evaluators exist to be trusted partners in learning, helping communities and organizations ask good questions, gather credible evidence, interpret it responsibly, and use it to take action.

© Eval Design Studio 2026