Don't be a Golgafrinchan telephone sanitiser
VfM assessment matters - but only if we make it useful
Making good decisions with limited resources is important.
Most resources are scarce, in the sense that there aren’t enough to satisfy unlimited human wants and needs. Inspiration may be a virtually limitless resource, but there are more good ideas (and not so good ones) than we can afford to implement with the time, money, knowledge, effort, and natural resources available to us. Deciding which promising ideas to invest in, tracking their progress and value, and knowing when and how to scale, adapt, or stop them - these are imperative.
As I’ve suggested before, and only slightly in jest, good resource use may be the alpha evaluation question.
And yet, how often do we treat VfM as a compliance exercise?
Too often, VfM assessment is treated as an obligation to satisfy funders rather than a practice to critically evaluate and improve policies and programs. The focus on compliance leads to data gathering and reporting that, while meeting basic requirements, fails to provide meaningful insights.
When we do that, we not only leave significant value on the table - but also risk suffering the same fate as the telephone sanitisers of Golgafrincham.
Who were they? Well, according to intergalactic historian Douglas Adams, Golgafrincham was a planet whose elites, tired of overpopulation, concocted a cunning plan to rid themselves of what they deemed the useless one-third of society. The Golgafrinchans decided to build three Ark ships. Ark A was reserved for leaders and scientists. Ark C was for people with useful occupations - the doers who actually built things and made things happen. Ark B was the vessel for people with ‘busy work’ occupations, like telephone sanitisers and jingle writers, who consumed Golgafrinchan resources while delivering little perceived value in return.
To bring their plan to fruition, the Golgafrinchans spun elaborate tales of impending doom. They convinced the Ark B passengers that their planet was on the brink of destruction, threatened by everything from crashing moons to enormous mutant star goats. The unsuspecting middle-class folk believed these wild stories and boarded Ark B, unaware that they were being sent off into space while the rest of the population would remain behind to live happy lives.
That doesn’t sound great for the telephone sanitisers. But surely VfM assessment is more useful than that?
As it turns out, both VfM assessment and telephone sanitising are essential. But they have to be done right.
Right now, VfM assessment is too fragmented - separated from program design, monitoring and evaluation processes, and from big-picture policy development. Consequently, we miss opportunities to improve decision-making, planning and review.
For example, VfM is often treated as something separate from evaluation and is limited to a technocratic, managerial exercise devoid of evaluative judgements. Moreover, there’s often a disconnect between ex-ante (forward-looking) business case appraisal to decide what policies and programs to fund, and ex-post (backward-looking) monitoring and evaluation to learn, improve, and assess impact. They’re treated as separate activities rather than being integrated as components of a coherent program lifecycle. Additionally, VfM assessments (and evaluations more generally) are usually done at the individual program level rather than considering the collective VfM of multiple programs, policies or whole systems.
These problems are compounded by ambiguity about what VfM (good resource use) even means. For example, the term “value for money” is often used as if it’s synonymous with cost-benefit analysis (CBA) - an economic method of evaluation - or return on investment (ROI) - an indicator produced by CBA. In some circles, CBA is regarded as the ‘gold standard’ for determining VfM. However, no single method or metric provides a comprehensive answer to a VfM question. Although CBA is a valuable decision-support tool, relying on it exclusively can lead to tunnel vision, overlooking crucial intangible factors and additional considerations beyond monetised benefits and costs.
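To make the tunnel-vision point concrete, here is a minimal sketch (with purely hypothetical figures) of how a benefit-cost ratio and ROI are computed. Notice that anything not monetised - equity effects, trust built, capability developed - simply never enters the calculation:

```python
# Illustrative only: hypothetical figures for a single program appraisal.
# CBA reduces performance to monetised benefits and costs; value that
# isn't monetised doesn't appear in either indicator.

costs = 2_000_000      # total monetised costs ($) - hypothetical
benefits = 2_600_000   # total monetised benefits ($) - hypothetical

bcr = benefits / costs              # benefit-cost ratio
roi = (benefits - costs) / costs    # return on investment

print(f"BCR: {bcr:.2f}")   # 1.30 - "worth doing" on this test alone
print(f"ROI: {roi:.0%}")   # 30% - says nothing about who benefits, or how
```

Both indicators can be useful inputs to a judgement, but neither is the judgement itself.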
In other settings, VfM is defined in terms of multiple criteria (e.g., equity, efficiency, sustainability) which may include, but are not limited to, the cost-benefit test. This is an advance on the single-criterion focus of CBA, though it still risks falling short on validity and usefulness if criteria are set top-down without seeking to understand what matters to the people affected, or when VfM assessments emphasise quantifiable indicators without evaluative thinking.
VfM assessments often come from a place of defensiveness, setting out to “prove” value to support a business case or protect continuity of funding. Additionally, a compliance mindset results in reports that may satisfy funders’ administrative requirements but lack clear insights about performance and opportunities to improve. This mindset encourages a risk-averse culture, where sticking to plans feels safer than innovating or learning. As a result, staff may cling to planned outputs for fear of appearing inefficient. This rigidity hinders the adaptation and learning necessary to improve VfM.
How can we do better? Tell me quick - I don’t want to end up on Ark B!
First, business cases, monitoring, evaluation, and VfM need to be joined up. Ex-ante appraisals should routinely include explicit evaluative judgements balancing benefit-cost ratios with wider considerations and should detail planned monitoring and evaluation arrangements integrated with program design, including data collection systems built in from the outset. Then we can track VfM from implementation through to impact and benefits realisation to see whether the investment delivered on its value proposition.
Next, VfM needs to be considered systemically and at multiple levels - from programs to portfolios, whole departments/organisations, and whole-of-government policy settings. Then resource allocation decisions can reflect the complexities of whole systems instead of siloed views of component parts.
To realise the real potential of VfM assessment, we need to treat it as a dynamic process involving human judgement, reflection, innovation and improvement. Shifting from technocratic analysis to blending technocracy with democracy better reflects how resource allocation decisions actually get made and makes visible the values of affected groups. Shifting from compliance to open reflection means asking critical questions - for example: What are we learning from current resource allocations, actions and impacts? Where can we improve? Could reallocating resources yield better results?
We also need to make better use of qualitative data, such as case studies, feedback from stakeholders and rights-holders, and insights from collective reflection processes, to complement economic and other quantitative metrics. These qualitative insights can reveal aspects of performance and value that numbers alone may miss, providing a fuller picture of whether a program is having a genuine impact.
A learning-focused approach fosters flexibility. When setbacks are seen as learning opportunities, organisations can adjust strategies in real time. This adaptive mindset should ultimately strengthen trust with funders and stakeholders through more impactful programming. However, it’s a two-way street. Funders and funding arrangements need to create an environment in which it is safe to have honest and open conversations about VfM.
What are some practical steps we can take now, to start making VfM assessments more useful?
An important lever for better VfM assessment is to bring in a logic that sits at the heart of what it means to evaluate, yet ironically isn’t always built into evaluations in an explicit and intentional way: interpreting evidence through the lens of explicit criteria and standards. Implementing this logic effectively turns evaluation into a collaborative, social process, bringing stakeholders to the table to share and hear different perspectives and deliberate on what is valuable and valued about a policy or program.
The Value for Investment approach sets out a series of steps to implement this logic, providing clear answers to VfM questions, with succinct reports underpinned by sound evidence and transparent reasoning. By following the process with stakeholders, we contribute not only to a better VfM assessment of an individual program, but also to building an evaluative culture - for example: thinking beyond measurement to mixed methods; thinking beyond methods to reasoning; thinking beyond proving to improving.
This approach can be (and is being) applied at multiple levels from programs to whole organisations and systems. The process of evaluative reasoning remains the same while the criteria, standards, and types of evidence are tailored to context.
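As a rough sketch of the underlying logic - note that the criteria names, performance levels, and standards below are invented for illustration, not taken from the Value for Investment approach itself - interpreting evidence against explicit criteria and standards might look like this:

```python
# Illustrative sketch of rubric-based evaluative reasoning.
# Each criterion has agreed standards describing levels of performance;
# evidence is interpreted against them to reach a transparent judgement,
# rather than reporting metrics alone. All content here is hypothetical.

RUBRIC = {
    # criterion: {performance level: what that level looks like}
    "equity": {
        "excellent": "hard-to-reach groups benefit at least equally",
        "good": "most priority groups are reached",
        "poor": "priority groups largely missed",
    },
    "efficiency": {
        "excellent": "outcomes delivered well below comparable unit costs",
        "good": "unit costs in line with comparable programs",
        "poor": "unit costs well above comparators, unexplained",
    },
}

def judge(criterion: str, level: str) -> str:
    """Pair a deliberated performance level with the standard it rests on,
    making the reasoning behind the judgement transparent."""
    standard = RUBRIC[criterion][level]
    return f"{criterion}: {level} ({standard})"

# In practice these levels would emerge from stakeholder deliberation
# over mixed evidence (quantitative and qualitative), not from code.
for judgement in (judge("equity", "good"), judge("efficiency", "excellent")):
    print(judgement)
```

The point of the sketch is the structure, not the automation: criteria and standards are agreed with stakeholders up front, so that the resulting judgements are explicit and contestable rather than implicit in a number.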
The telephone sanitisers had the last laugh (though it is unclear whether they were alive to enjoy it).
The Golgafrinchans who stayed behind were ultimately wiped out by a virulent disease contracted from a dirty telephone. In other words, it didn’t end well for anyone. Let’s not take that chance with VfM assessment.
Don’t be like the telephone sanitisers of Golgafrincham. Let’s not do VfM assessments that are perceived as low-value or irrelevant. Let’s make VfM assessment more than a tick-box exercise. Let’s make it more inclusive, valid, credible, and useful to inform real-world resource allocation decisions. That’s what I’m working towards here on Substack, on my resources page, through training and other capability-building.