Principles and methods to help evaluators answer value for money questions
(that we hope you will find challenging)
Hi all! John Gargani and I have a new article published in the journal Evaluation, building on our presentation at the European Evaluation Society conference in Copenhagen, June 2022. This post gives you a quick overview.
The following summary is extracted from the paper and includes verbatim or slightly modified passages. Please cite the original source paper:
Gargani, J., & King, J. (2023). Principles and methods to advance value for money. Evaluation, 1-19. DOI: 10.1177/13563890231221526
First, we define value for money (VfM), located at the intersection of evaluation and economics.
The concept of VfM is grounded in a common-sense notion of fairness: getting what you pay for. While straightforward in a well-functioning market, it becomes more complicated in social investments, where the buyer (a government, philanthropist, etc.) purchases impacts for the benefit of others. We argue that VfM determinations should play a central role in allocating resources for the benefit of diverse communities, and we see opportunities to enhance VfM assessment.
We define VfM as good resource use, placing VfM at the intersection of evaluation (judging what is good) and economics (studying resource use). Judging good resource use requires a holistic assessment of value, using criteria and standards representing diverse stakeholder perspectives.
Value is more than money. It includes merit, worth, significance, quality, importance, etc. VfM assessments often focus narrowly on economic or monetary interpretations, leading to misconceptions that VfM and return on investment (ROI) are synonymous. Our definition encompasses monetary and non-monetary resources and value. Evaluators should incorporate as many different conceptions of value as needed to represent the diversity of perspectives held by stakeholders.
Impact is central to VfM. While not the only consideration, it is typically crucial because it would be difficult to justify that resources were used well in the absence of positive impacts. The value stakeholders place on policies and programs depends on which impacts evaluators describe and how well we describe them.
VfM spans the entire causal chain: resources fuel organisational actions, which produce impacts (including outputs and outcomes) that people value. When answering VfM questions we must consider the whole span to ascertain the relationship between resource use and value creation.
Next, we argue for a holistic assessment of VfM using tools that evaluators already have, like rubrics.
A holistic assessment involves comprehensive criteria, standards, and evidence representing diverse perspectives. Evaluators can bring these elements together to address VfM questions using the general logic of evaluation. There are many ways to put the general logic of evaluation into practice, and rubrics are one. Rubrics, being relatively simple, systematic, transparent, easily revised, and inclusive, enable evaluators to judge resource use holistically. A VfM rubric may include economic and non-economic criteria and standards, tailored to the context.
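To make the rubric idea concrete, here is a toy sketch in code (not from the paper): hypothetical criteria, evidence scores, and thresholds that map evidence onto a set of standards. Real rubrics are developed with stakeholders and are usually qualitative; the numeric thresholds here are invented purely for illustration.

```python
# Illustrative sketch only: a minimal VfM rubric in code.
# All criteria, thresholds, and evidence values are hypothetical.

STANDARDS = ["poor", "adequate", "good", "excellent"]

def rate(evidence_score, thresholds):
    """Map a normalised evidence score to a standard via ascending thresholds."""
    level = sum(evidence_score >= t for t in thresholds)
    return STANDARDS[level]

# Hypothetical rubric: each criterion has thresholds separating the standards.
rubric = {
    "cost per participant reached": [0.3, 0.6, 0.8],
    "equity of access":             [0.4, 0.6, 0.85],
    "strength of outcomes":         [0.5, 0.7, 0.9],
}

# Hypothetical evidence, normalised to a 0-1 scale.
evidence = {
    "cost per participant reached": 0.72,
    "equity of access": 0.55,
    "strength of outcomes": 0.91,
}

for criterion, thresholds in rubric.items():
    print(criterion, "->", rate(evidence[criterion], thresholds))
```

Note that the criteria mix economic and non-economic considerations, which is the point: a VfM rubric need not reduce everything to money.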
We introduce three principles that further align VfM with evaluation:
Value depends on the credibility of estimates. People should place less value on an impact as the credibility of its estimate decreases. This means evaluators should adjust value for risk. The uncomfortable truth is that we usually don’t. Instead, we tend to treat evidence as if it is fully and equally informative. Consequently, we may be consistently overestimating value.
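The idea of adjusting value for risk can be sketched as a simple discounting step (our own illustration, not a method prescribed in the paper): scale an impact estimate by a credibility weight, so weaker evidence contributes less value.

```python
# Illustrative sketch only: discounting an estimated impact by the
# credibility of its evidence. The weights below are hypothetical.

def credibility_adjusted(estimate, credibility):
    """Scale an impact estimate by a credibility weight in [0, 1]."""
    if not 0.0 <= credibility <= 1.0:
        raise ValueError("credibility must be between 0 and 1")
    return estimate * credibility

# The same reported effect from a strong study vs. a weak one:
print(credibility_adjusted(100.0, 0.9))  # little discounting
print(credibility_adjusted(100.0, 0.4))  # heavy discounting
```

Treating both studies as fully informative would count the effect as 100 in each case, which is exactly the overestimation the principle warns against.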
Things don’t have value; people place value on things. An evaluator can’t ascertain the value of an impact just by studying the impact, because that’s not where the value is located. Value comes from people (we address the philosophical argument that some things, like nature, have intrinsic value). So evaluators must learn from people, especially those affected, how much and what type of value they place on impacts.
People value the same things differently. That variation in value perspectives is information. Evaluators have a duty to ascertain, understand, and report this information. When value perspectives conflict, we should help reconcile them, in keeping with program evaluation standards.
Together, these principles suggest evaluators should arrive at multiple, possibly conflicting conclusions that represent diverse value perspectives.
We demonstrate how these principles may be enacted using a VfM rubric.
We provide an example to illustrate how context-specific criteria and standards may be applied to answer a VfM question. The example demonstrates the importance of thinking more broadly than economic considerations alone. We can't give it all away here; check it out in the paper!
Building on this example, we illustrate the three principles: value depends on the credibility of estimates; people place value on things; and people value things differently. We show how rubrics can help evaluators in considering the perspectives of different groups (“multiple accounts”) in four ways:
Stakeholders apply different standards
Stakeholders seek different evidence
Stakeholders use different criteria
Stakeholders apply different importance weights
Within each stakeholder group/account, variation may be less than across all stakeholders, making synthesis more tractable. We can go further (if needed) and look at within-group variation in value perspectives too.
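The fourth way, different importance weights, can be illustrated with a toy calculation (our own sketch, not from the paper): two hypothetical accounts weight the same criterion ratings differently and so reach different overall conclusions.

```python
# Illustrative sketch only: two stakeholder "accounts" weighting the same
# criterion ratings differently. All criteria, ratings, and weights are
# hypothetical.

RATING_SCORE = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

# Shared ratings against three hypothetical criteria.
ratings = {"economy": "excellent", "equity": "adequate", "impact": "good"}

# Each account assigns different importance weights (summing to 1).
accounts = {
    "funder":    {"economy": 0.5, "equity": 0.2, "impact": 0.3},
    "community": {"economy": 0.1, "equity": 0.5, "impact": 0.4},
}

def weighted_score(ratings, weights):
    """Weighted synthesis of criterion ratings for one account."""
    return sum(RATING_SCORE[ratings[c]] * w for c, w in weights.items())

for account, weights in accounts.items():
    print(account, round(weighted_score(ratings, weights), 2))
```

The two accounts rank the same program differently, which is precisely the information (conflicting conclusions across value perspectives) the article argues evaluators should report rather than average away.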
Multiple, sometimes conflicting, conclusions offer valuable insights for resource allocation. We argue that where feasible, this should be the norm in evaluation because multiple conclusions, especially those that contradict or conflict, are what evaluation standards demand, and what decision-makers need to allocate resources well. The fact that this isn’t routinely done highlights an opportunity to improve evaluations of resource use.
Check out the full paper:
Gargani, J., & King, J. (2023). Principles and methods to advance value for money. Evaluation, 1-19. DOI: 10.1177/13563890231221526
Open-access preprint version here.