Our exploration of the 5Es continues. Ex-post (retrospective) value for money (VfM) assessments often examine a program in terms of five criteria: economy, efficiency, effectiveness, cost-effectiveness and equity.
I've argued that the 5Es aren't the definitive set of VfM criteria - there's no such thing. But they're good conversation starters because they cover a program's value chain, spanning resources that fuel organisational actions that contribute to impacts that people value.
I've argued that generic definitions of the 5Es won’t do - we need context-specific criteria, defined in collaboration with stakeholders, representing negotiated and agreed aspects of VfM to focus on.
In this series of short reads I'm sharing some concepts that can help in defining context-specific interpretations of each E.
In the last couple of weeks we've looked at equity and cost-effectiveness.
Now let's talk about effectiveness.
Effectiveness addresses the part of the value chain where we ask whether an intervention’s efforts made an impact. This is distinct from the value of those impacts to people, which (as discussed last week) is addressed at the cost-effectiveness level.
When defining effectiveness criteria for our program, it can help to ask:
🤔 Which outcomes or impacts should we pay attention to, as indicators that the investment is on track to create value?
Outcomes and impacts are defined in various ways.
For today's purpose I'm using both outcomes and impacts as equivalent terms for changes in people, places and things that are caused by a program's actions, or to which the actions contribute. So, when we're assessing outcomes or impacts, we need to investigate not just what changed, but what caused or contributed to the change.
These may include real changes in people’s lives, like health status or educational attainment, or signs of progress toward those bigger outcomes, like changes in knowledge, skills and behaviour.
If your VfM assessment is part of a wider monitoring, evaluation and learning program, then there may also be an outcomes evaluation underway. In that case, it’s important to coordinate workstreams to ensure the VfM assessment and outcomes evaluation are conceptually coherent and collect the right data to serve both purposes.
Keep outcomes distinct from outputs.
Outputs are products or services delivered through the program’s actions and substantively within its control, whereas outcomes are consequences of the program and involve some action or change in people, places or things external to the program.
Outputs (e.g., “training was delivered”) belong at the efficiency level of the 5Es framework. Outcomes (e.g., “the training improved performance”) belong at the effectiveness level.
Outputs are sometimes misclassified as outcomes, perhaps because outputs happen sooner and it’s often easier to measure and report what the program delivered than how it made a difference. Conversely, outcomes are sometimes misclassified as outputs (I don’t know why, but I’ve mainly seen this in logframes). If an output or outcome has been misclassified in an existing theory of change or logframe, I reclassify it to the part of the VfM framework where it rightly belongs.
Causal questions and evaluative questions are distinct, though they are sometimes conflated.
Both involve making warranted judgements based on evidence and logical reasoning, but causal questions focus on why something happened (and how, for whom, and in what circumstances), whereas evaluative questions focus on how good something is.
To assess effectiveness, we need both.
We have multiple options at our disposal for tackling causal questions - quantitative and qualitative, experimental, quasi- and non-experimental. All options (and combinations thereof) are on the table as far as I’m concerned. Select according to context. Horses for courses. No gold standards, except sound reasoning.
We also have multiple options for tackling the evaluative part of the assessment - for example, determining whether the outcomes and impacts meet, exceed or fall short of reasonable expectations. In this Substack series I’ve explored multiple approaches to evaluative reasoning, with a primary focus on rubrics and mixed reasoning.
In VfM assessment, intended and unintended outcomes matter.
VfM assessments often focus on whether a program is achieving its intended outcomes. From this perspective, the assessment of outcomes should align with a theory of change or logic model. Often, intended outcomes are identified by program architects. However, we should also seek to understand and evaluate outcomes through the lens of recipients' needs and expectations. What's more, some outcomes may be unintended and could be positive or negative, with implications for the overall value of the investment. Different people can experience different outcomes, so it may be important to consider for whom, when and why a program is effective.
Coming up next: Efficiency
Here’s the original article that this one builds on, including all 5Es and even some bonus criteria that start with other letters 👇