Integrating value for money within an evaluation
Using an intuitive stepped approach as an integration tool
Traditional approaches to value-for-money (VfM) assessment often treat VfM as separate from other evaluation activities, such as process and impact evaluation.1 In some settings, VfM is treated (mistakenly, in my view) as if it’s synonymous with economic evaluation; other times it’s not even seen as evaluation but more as a financial management or administrative function. Either way, this frequently means VfM and evaluation are handled by different teams, working on different activities, writing different reports, on different time frames.
However, VfM assessment is evaluation - and it’s entangled with other evaluation questions and criteria in multiple ways. If we don’t coordinate them, we miss opportunities to provide a more seamless evaluation with linked activities, coherent frameworks, and integrated findings.
The OECD DAC evaluation criteria are one example of a set of criteria that overlap with VfM. Even the most superficial glance reveals that there are two DAC criteria in common with the 5Es - efficiency and effectiveness. So if an evaluation is commissioned to address both sets of criteria, there will be some boundary issues to address.
However, on deeper inspection the boundary issues are more complex than just efficiency and effectiveness. The DAC criteria intersect with the 5Es in multiple ways - for example, the 5Es definition of effectiveness overlaps with the OECD DAC definitions of both effectiveness and impact, while the OECD DAC definition of efficiency intersects with three of the 5Es: economy, efficiency, and cost-effectiveness. Moreover, I’d go so far as to argue that all of the OECD DAC criteria are relevant to questions about good resource use and value creation (i.e., they are all potential VfM criteria).
Untangling the dog’s breakfast - Integrated evaluation design principles
Life is complex. I don’t make the rules - that’s just the way it is. But it’s sometimes as if evaluation and research start out with an assumption that life ought to be simple, and treat the complexity as an inconvenience. Then we end up with simplistic reductive evaluations, siloed evaluation activities, and uncoordinated reports that look like they came from different mothers. But it doesn’t have to be that way.
When we’re finding a path through the complexities of coordinating VfM with process, impact and other evaluations, I suggest the following design principles:
Integration: Develop evaluation frameworks and plans together - with the expectation that the boundaries between VfM and other components may not be clean-cut but can still be clarified.
Pragmatism: Draw sensible boundaries between different evaluation activities based on needs and context.
Coherence: Develop common foundational elements that will apply across the whole evaluation, such as a unifying theory of change, value proposition, and a shared language for key concepts.
Complementarity: Minimise repetition across different evaluation activities and deliverables (e.g., a VfM report could cross-reference an impact evaluation report to avoid duplicating content).
Coordination: Identify common elements and dependencies between evaluation activities (e.g., the design and timing of different fieldwork and reporting tasks) to provide an efficient process and minimise respondent burden. For example, if there’s going to be a survey, can it be designed to meet all evaluation needs, including VfM?
Reasoning: Take a consistent approach to evaluative reasoning, with a transparent framework across the whole evaluation to support judgements on VfM and other criteria.
What this means at each step of the evaluation
To design and implement a cohesive set of evaluations, we can use the following sequence of steps. Each step builds on the last. The sequence matters because it ensures the evaluation starts by defining what matters and then moves on to consider what evidence is needed. It distinguishes reasoning (how we make evaluative judgements - steps 2, 3 and 7) from methods (how we gather and analyse evidence - steps 4-6) - though, as I’ve mentioned before, real-world evaluations may be messier and more iterative while still observing the underlying logic of the process.2
The key is to design and implement all evaluation frameworks and plans together, within an overarching, unifying process.
Here’s what you can do at each step of the process to design and deliver a coherent, well-coordinated set of evaluations.
Step 1 - Understand the program
Step 1 focuses on understanding the program - for example, its objectives, history, context, stakeholders, and the purpose, uses and users of the evaluation. This foundational work should be carried out with representation from the entire evaluation team (e.g., if there are separate evaluation work streams for VfM and other components) as well as key stakeholders. The aim is to establish shared foundational elements, such as a theory of change and value proposition, that inform all work streams.
At this step, it is crucial to engage the commissioner and stakeholders to define clear, big-picture evaluation questions that will guide the evaluation.3 This involves taking stock of all evaluation objectives and questions (for example, those related to process, impact, VfM, etc) and developing them into a coherent, integrated set. The goal is to ensure clear boundaries between different evaluation components, while also coordinating the overall design so that the work and reporting are aligned.
For example, if the VfM report will draw on evidence from an impact evaluation, plan the sequencing of work so that the impact evaluation is completed a few weeks ahead of the VfM assessment. This approach ensures that evaluation reports are coordinated and that the evidence base is robust across all areas of inquiry.
Step 2 - Criteria
Step 2 is identifying what aspects of the program matter enough to focus on in the evaluation (i.e., criteria of quality, performance, value, etc) and developing context-specific definitions for each of those aspects, in collaboration with stakeholders.
During this step, a coherent set of criteria should be defined that align with and reflect the boundaries between different evaluation objectives and questions.
For example, and only to illustrate, if there are process, impact, and VfM work streams, it could be decided that:
The process evaluation will focus on the nature and quality of organisational actions (with criteria such as implementation fidelity, delivery success, stakeholder satisfaction, risk management, etc), while the VfM evaluation will consider stewardship of resources and productive ways of working that maximise outputs for a given level of investment (with criteria for economy, relational efficiency, dynamic efficiency, allocative efficiency, and technical efficiency); and
The impact evaluation will focus on real changes in people, places and things, caused by the organisational actions, while the VfM evaluation will consider the value that people and groups place on those outcomes and the relationship between resources invested and value created, bearing in mind opportunity cost (possibly including economic evaluation).
This division of criteria would provide clear boundaries between VfM and other work streams.
Step 3 - Standards
Step 3 is to define standards, levels of quality, performance or value such as excellent, good, adequate, and poor. When integrating multiple evaluation components, a consistent approach should be taken across the whole evaluation.
For example, it could be decided to use four levels with the following generic definitions:
Excellent: exceeding expectations
Good: generally meeting reasonable expectations
Adequate: meeting minimum requirements and showing acceptable progress
Poor: falling short of minimum requirements or acceptable progress.
Sometimes a generic set of standards is enough. Other times these standards can guide alignment or ‘calibration’ of bespoke definitions for each criterion. These definitions are context-specific and developed with stakeholders.
The criteria and standards are summarised in rubrics that address all components of the evaluation in a coherent fashion, to answer the evaluation questions.
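If it helps to see the shape of this, here’s a minimal sketch in Python - purely illustrative, with hypothetical criterion names and descriptors rather than a template - of how a rubric’s criteria and standards might be recorded so the whole team works from the same definitions:

```python
# Illustrative only: a generic rubric captured as a simple structure.
# Criterion names and descriptors are hypothetical, not a recommended set.

RUBRIC = {
    "economy": {
        "excellent": "Exceeding expectations for the cost and quality of inputs.",
        "good": "Generally meeting reasonable expectations for prudent procurement.",
        "adequate": "Meeting minimum requirements and showing acceptable progress.",
        "poor": "Falling short of minimum requirements or acceptable progress.",
    },
    "efficiency": {
        # Bespoke, context-specific descriptors developed with stakeholders
        "excellent": "...",
        "good": "...",
        "adequate": "...",
        "poor": "...",
    },
}

def describe(criterion: str, level: str) -> str:
    """Look up the agreed descriptor for a criterion at a given performance level."""
    return RUBRIC[criterion][level]

print(describe("economy", "good"))
```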
Step 4 - Evidence needed
Step 4 is to determine, on the basis of the agreed criteria and standards, what evidence is needed and will be credible to enable valid evaluative judgements to be made. This, in turn, influences decisions about what mix of methods will be appropriate to gather and analyse the evidence needed.
When integrating evaluative components, there are opportunities to select and design methods and tools that serve all evaluation purposes collectively. For example, if there are going to be case studies, can they be designed to serve multiple evaluation and VfM needs?
At this stage, you could also prepare an evaluation matrix, setting out the methods and data sources that will be used to address each evaluation question and criterion, laying the groundwork for a well-coordinated evaluation.
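For those who like to see the mechanics, here’s a small illustrative sketch in Python of such a matrix - every question, criterion and method below is hypothetical - showing how it can be scanned for methods that serve several criteria at once, which are natural candidates for shared design:

```python
# Illustrative sketch of an evaluation matrix: which methods and data sources
# address each evaluation question and criterion. All entries are hypothetical.

from collections import defaultdict

MATRIX = [
    {"keq": "KEQ1: How well was the program implemented?",
     "criterion": "implementation fidelity",
     "methods": ["document review", "staff interviews"]},
    {"keq": "KEQ2: Does the program represent good value for the resources invested?",
     "criterion": "efficiency",
     "methods": ["financial data analysis", "staff interviews", "survey"]},
    {"keq": "KEQ2: Does the program represent good value for the resources invested?",
     "criterion": "cost-effectiveness",
     "methods": ["economic analysis", "survey"]},
]

# Flag methods serving multiple criteria - e.g., one survey instrument that
# could be designed to meet both VfM and impact needs.
shared = defaultdict(list)
for row in MATRIX:
    for method in row["methods"]:
        shared[method].append(row["criterion"])

for method, criteria in shared.items():
    if len(criteria) > 1:
        print(f"{method} serves multiple criteria: {', '.join(criteria)}")
```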
Steps 5 and 6 - Gather and analyse evidence
Steps 5 and 6 involve gathering and analysing the evidence. If well-planned and coordinated, these processes should meet the needs of all evaluation components in an efficient and well-orchestrated manner - maximising the return on evaluation effort and minimising respondent burden. Different streams of evidence (e.g., surveys, interviews, RCTs, economic analysis) can be analysed in parallel by different team members, with coordinated time frames.
Step 7 - Synthesis & judgement
Step 7 is the synthesis step, where relevant streams of evidence are considered together to reach evaluative judgements for each criterion and collectively. This process is guided by the criteria and standards and can involve deliberation by the evaluation team and stakeholders.
If the evaluation design process above drew clear boundaries between different evaluation questions and criteria, it will now pay back with a well-structured approach to synthesis and evaluative judgements across all evaluation components.
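Purely as an illustration, here’s a small Python sketch - with hypothetical ratings and evidence sources - of what a transparent trail from evidence to per-criterion judgement might look like, noting that the overall judgement is reached through deliberation against the rubric rather than computed mechanically:

```python
# Illustrative sketch of the synthesis step: per-criterion judgements, reached
# by weighing evidence against the rubric. Ratings and sources are hypothetical;
# the overall judgement is deliberative, not a mechanical average.

judgements = {
    "economy": {"rating": "good", "evidence": ["financial data analysis", "document review"]},
    "efficiency": {"rating": "excellent", "evidence": ["staff interviews", "survey"]},
    "cost-effectiveness": {"rating": "adequate", "evidence": ["economic analysis"]},
}

def summarise(judgements: dict) -> str:
    """Set out the evidence base behind each per-criterion judgement."""
    lines = []
    for criterion, detail in judgements.items():
        sources = ", ".join(detail["evidence"])
        lines.append(f"{criterion}: rated {detail['rating']} based on {sources}")
    lines.append("Overall VfM judgement: agreed through deliberation against the rubric.")
    return "\n".join(lines)

print(summarise(judgements))
```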
Step 8 - Reporting
Step 8 is the reporting step and the final stage where the integrated design process pays back. At this stage you should be well set up to produce a coherent, well-coordinated set of reports (or one integrated report) that presents findings on a clear and consistent basis, is easy for readers to follow, gets straight to the point, answers the evaluation questions, and backs explicit judgements with evidence and logical reasoning.
Bottom line
If you want to integrate VfM with other elements of an evaluation, develop them as a unified whole - with coherence and coordination across the design process, team members involved, evaluation frameworks and plans, evaluation activities, and reports. The 8-step Value for Investment process provides a road map for doing just that.
Resource
This post builds on Oxford Policy Management’s guide to assessing VfM, which can be downloaded for free.

Acknowledgement
Many thanks to Patrick Ward for helpful peer review. The views expressed here, and any errors or omissions, are my own.
Upcoming Value for Investment training workshops
Aotearoa New Zealand Evaluation Association (ANZEA) online.
Australian Evaluation Society (AES): 16 September, Canberra. Be quick - limited places!
UK Evaluation Society, 24-25 September, online.
Thanks for reading!
For example, the current edition of the UK Magenta Book explicitly sets out a three-way typology of evaluations - process, impact, and VfM. This triad is commonly referenced in evaluation terms of reference. However, I think this classification of evaluation into three ‘types’ is overly restrictive. In practice, evaluations often have multiple objectives, and many evaluation questions and criteria don’t fit neatly within the categories of process, impact, and VfM. Instead, evaluations should have clearly defined objectives and questions, with the evaluation design tailored to address them directly.
For readers who are new to this series, I regard the defining feature of evaluation as judging the value of something. At its core, evaluation is about weighing evidence about programs and policies (e.g., their quality, success, and what matters to people impacted) and making considered, transparent judgements about how well things are going, what actions to take next, and so on. The 8-step process outlined here is designed to support this evaluative reasoning process with appropriate stakeholder participation, including defining explicit criteria (what matters) and standards (what good looks like) before selecting evidence sources and methods, gathering and analysing evidence, and synthesising the evidence through the lens of the criteria and standards to transparently judge value.
Once a first draft of evaluation questions is developed, which might sometimes include 20 or more, it is usually possible to reorganise these into a smaller set of around 2-5 key evaluation questions (KEQs). Much of the detail from the initial draft questions can be reframed as criteria that define the focus and scope for addressing each KEQ. This process creates a hierarchical structure in which the big-picture KEQs provide clear direction for the evaluation, and the criteria serve as detailed lenses through which each KEQ is addressed. For example, if a KEQ asks, “To what extent does the program represent good value for the resources invested?” the related criteria often include economy, efficiency, effectiveness, cost-effectiveness, and equity. This also provides logical structure for reporting findings, e.g., there could be a chapter heading based on the KEQ, with sub-headings for each of the criteria.
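As a purely illustrative sketch (hypothetical content, except the VfM example above), the hierarchy might be captured like this, with the same structure doubling as a report outline:

```python
# Illustrative only: KEQs with criteria as the lenses through which each is
# addressed. The second KEQ and its criteria are hypothetical examples.

KEQS = {
    "To what extent does the program represent good value for the resources invested?":
        ["economy", "efficiency", "effectiveness", "cost-effectiveness", "equity"],
    "How well was the program implemented?":
        ["implementation fidelity", "delivery success", "stakeholder satisfaction"],
}

# The same nesting provides the reporting structure: a chapter per KEQ,
# a sub-heading per criterion.
for keq, criteria in KEQS.items():
    print(f"Chapter: {keq}")
    for criterion in criteria:
        print(f"  Section: {criterion}")
```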