Social return on investment (SROI) and cost-benefit analysis (CBA) can be used to support sound reasoning and good decisions. But that isn’t guaranteed. If they’re not well designed and executed, shortfalls in data and logic can lead us to false conclusions (as with any method). Worse, these failings are sometimes deliberate. Models can be manipulated to garner support for a predetermined course of action. Findings can be reported on a “trust us, we’re experts” basis rather than opening up the data and reasoning to scrutiny.
Here are a few principles to help ensure that systematic analysis of costs and benefits provides rigorous and meaningful evaluations. More detail in the links below…
🤝 Stakeholder engagement: The perspectives of people affected by the program can inform study design and the interpretation of findings, helping to ensure CBA (and wider evaluation) is contextually relevant, reflects diverse values, and meets stakeholders’ needs.
🗒️ Describe the value in words: Before considering what methods or metrics to use, describe the policy or program’s value proposition in plain words. If we can describe it, we can evaluate how well it’s being met.
⚙️ A robust approach to causal inference: Define the counterfactual scenario and ensure the analysis doesn’t assume the intervention caused outcomes without proper evidence. SROI guidance calls this ‘avoiding over-claiming’, but I think it’s worth distinguishing two aspects: first, establishing whether there is any causal claim to be made at all; and second, ensuring that the quantification of attributed impacts and value is appropriately conservative (a worked sketch of this second aspect follows this list).
🐕 Know the pedigree of your valuations: There are various ways of putting monetary valuations on intangible outcomes. Sometimes, valuations borrowed from other contexts are compiled into a database or ‘social value bank’, providing a ready source of monetary valuations for intangible things like cultural connectedness, having friends, or experiencing improved mental health. It’s important to know where the numbers came from (e.g., the original context, methods, and assumptions) so you can assess whether they’re reliable and relevant to your own setting.
🎛 Address uncertainty: We may not have precise measurements of all the variables we need, so there’s often an element of scenario modelling, blending program data with external evidence and assumptions. Sensitivity analysis can reveal a lot about how these modelling decisions and their associated uncertainties affect findings and conclusions (see the second sketch after this list).
🔎 Transparency: Designing and conducting a CBA or SROI involves multiple analytic decisions that can affect conclusions. Transparency about those decisions allows us to assess the quality of the study and whether it meets our information needs. Lack of transparency erodes credibility.
⚖ Recognise strengths and limitations of methods: Systematic analysis of costs and benefits contributes unique insights but may not provide a complete answer to a value-for-money question. Understand what it’s not telling you.
🥗 Mixed methods: It's often helpful to combine CBA with other methods to gain insights from additional criteria and evidence, providing a more nuanced understanding of complex social investments. CBA can enhance mixed methods, and mixed methods can enhance CBA.
💡 Evaluative reasoning: To guard against mechanistic “computer says no” use of findings, make evaluative judgements transparently, using explicit criteria and standards to interpret evidence (qualitative, quantitative, and/or economic) on an agreed basis.
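To make conservative quantification (the second aspect under causal inference above) concrete, here’s a minimal sketch in Python with entirely hypothetical numbers. It applies the discounts described in SROI guidance (deadweight, displacement, and attribution to others) to a gross monetised outcome. The `attributed_value` helper and every figure in it are illustrative assumptions, not prescribed values.

```python
# Illustrative only: hypothetical figures showing the SROI-style
# discounts for deadweight, displacement, and attribution.

def attributed_value(gross_value, deadweight, displacement, attribution):
    """Reduce a gross outcome value to the share plausibly caused by
    the program. Each rate is a proportion between 0 and 1."""
    value = gross_value
    value *= (1 - deadweight)    # would have happened anyway
    value *= (1 - displacement)  # outcome displaced from elsewhere
    value *= (1 - attribution)   # share caused by other actors
    return value

gross = 100_000  # hypothetical monetised outcome (e.g. from a value bank)
claim = attributed_value(gross, deadweight=0.30, displacement=0.05, attribution=0.25)
print(f"Gross: ${gross:,.0f}; conservatively attributed: ${claim:,.0f}")
# Only ~$49,875 of the $100,000 gross value is claimed for the program.
```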
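And to illustrate the sensitivity analysis point, here’s a second hedged sketch: a simple Monte Carlo simulation that recomputes a benefit-cost ratio many times while two uncertain inputs (a hypothetical unit valuation and deadweight rate) are drawn from plausible ranges. All names, ranges, and figures are assumptions for illustration, not recommended values.

```python
# Illustrative Monte Carlo sensitivity analysis for a benefit-cost ratio.
# Every figure and range here is a hypothetical assumption.
import random

random.seed(42)          # reproducible illustration

COST = 250_000           # hypothetical program cost
PARTICIPANTS = 400       # hypothetical number of participants

ratios = []
for _ in range(10_000):
    unit_value = random.uniform(800, 1_600)  # uncertain valuation per outcome
    deadweight = random.uniform(0.15, 0.45)  # uncertain counterfactual rate
    benefits = PARTICIPANTS * unit_value * (1 - deadweight)
    ratios.append(benefits / COST)

ratios.sort()
median = ratios[len(ratios) // 2]
low, high = ratios[int(0.05 * len(ratios))], ratios[int(0.95 * len(ratios))]
print(f"Median BCR {median:.2f}; 90% of runs between {low:.2f} and {high:.2f}")
# If the interval straddles 1.0, the 'benefits exceed costs' conclusion
# depends on the assumptions, which is exactly what we need to know.
```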
Drawn from some of my previous Substack articles, which address these points in detail:
I’m on Bluesky! If you’re on there, let’s connect. Here’s a link to my profile. And to connect with others, check out the Evaluation Professionals and Users Starter Pack, thanks to Will Allen.