what is MEL?
MEL is short for monitoring, evaluation, and learning. The Studio defines MEL as:
“the natural process of watching how things are going, asking if they're working and why, and then changing our behavior to continuously improve.”
Humans, many wild animals, and plants engage in MEL-like processes to survive. Thus, we stress that MEL is natural. In humans, it shows up when we decide what policies to pass, what programs to fund, and even who to date.
On that note, to keep things interesting, we'll explain MEL using a romantic relationship, with examples from a hypothetical program that aims to improve academic outcomes. Let's get started!
monitoring
Monitoring is an ongoing process we use to make sure things are progressing well...and to spot when they are not. Spotting trends early makes it possible to keep moving in the right direction and to course correct before things get out of hand.
In a relationship, monitoring would be keeping an eye on your partner's behavior. Are they responsive to texts? Do they appear to be in a good mood? These are the day-to-day observations we collect.
To monitor an education program, for example, we might regularly collect data such as:

evaluation
Unlike monitoring, evaluation only happens at key times, usually before a program starts, around the time it ends, and every 1, 5, 10, or 15 years in between. We ask specific questions about what has changed, why, and what should be done next.
Think of evaluation like sitting down with your partner once a year and formally examining whether there have been positive or negative changes in your relationship, whether you are more or less happy than you were this time last year, and how you can grow stronger together.
On an education program, an evaluation might ask:
learning
As Cardi B and Offset's old relationship shows, learning is the part people sometimes struggle with.
In fact, prior to 2010, most people just said M&E. The "L" is a relatively new addition. It is important to stress that "learning" is not just gaining new information. It is also about taking action.
In a relationship, learning would be the combined act of discovering your partner likes flowers, then surprising them with a bouquet once a month to keep the spark alive.
For an education program, learning could be finding out that 40% of students spend less than one hour a week reading at home, then deciding to create a "Book Blitz" that encourages children to read more.

dealing with the unavoidable
Although we avoid technical language as much as possible, it is important that you are familiar with common MEL terms. We've chosen a few that are hard to escape. You will very likely see these words in program documentation, and they will come up in our design sessions.

Alright, those were the broad terms that relate to programs as a whole (including MEL work). The remaining terms are slightly more technical and specifically relate to evaluation and research:

That's it! If you develop a good understanding of these terms, you'll be in a very good position. Don't worry if you have to read them over a few times. Novice evaluators and program staff conflate these terms regularly. Rest assured, if you have questions, we're here to support you.
setting the record straight
Now that you know what MEL is and are starting to develop a good understanding of terms, the last thing to do is address some damaging misconceptions:
Misconception: Evaluations are audits
Truth: Auditors ask whether people are following the rules. Evaluators ask what the effect of someone's work has been, and what can be done to improve. We are not the same.
Misconception: Evaluation is a form of verification
Truth: Evaluations don't set out to prove things, including success. They are designed to get an honest picture of what happened. Sometimes evaluations confirm our hunch that things are working; however, they may also show where improvements are needed. Both outcomes are useful.
Misconception: Evaluations need to show causation to be worthwhile
Truth: Assessing causation allows you to say something like, "this program caused in-school suspensions to decrease by 5%."
The reality is that evaluations that unlock these types of claims are very expensive, and not suited for every situation.
Programs (and people) rarely exist in a vacuum. So most outcomes are the result of numerous factors (other programs, changes in the environment, etc.). Therefore, it is perfectly legitimate, and often more intelligent, to assess contribution. That is, whether and how a program contributed to change, not whether it is solely responsible for it.

Misconception: Numbers are more credible than stories
Truth: Numbers tell us what is happening and stories tell us why. We need both to make improvements.
It is best practice to collect both quantitative and qualitative data, and to triangulate findings with additional sources, such as publicly available data.
On this point, more data does not mean more credibility. It is best to collect only what is necessary to answer the evaluation questions.
We address these misconceptions (and call them damaging) because they have severely altered some people's relationships with evaluation and evaluators. They create hostility, distrust, and unrealistic expectations. Evaluators exist to be trusted partners in learning, helping communities and organizations ask good questions, gather credible evidence, interpret it responsibly, and use it to take action.
© Eval Design Studio 2026