Social Return on Investment (SROI)1 is guided by eight social value principles, one of which is “Do Not Overclaim” the value attributed to an organisation’s activities. The idea is to take a conservative approach and ensure reported impacts aren't exaggerated.
The problem is, this principle can be applied without addressing a more fundamental issue: whether the activities caused anything to happen at all.
Welcome to this week’s instalment of Old Man Yells At Cloud.
How it works
SROI avoids overclaiming by applying four key filters: deadweight, displacement, attribution, and drop-off.
Deadweight estimates the proportion of an observed change that would have happened anyway, even without the intervention.
Displacement asks whether the intervention simply shifted a problem or effect elsewhere.
Attribution2 examines the extent to which other organisations or factors may have contributed to the outcome.
Drop-off accounts for how the impact of an intervention diminishes over time.
Together, these filters trim the “outcomes” down so we can show that we’re being conservative and not exaggerating our impact.
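For the mechanically minded, here is a minimal sketch (in Python) of one way the four filters can be combined. It assumes the first three are applied as additive percentage reductions to the observed change and that drop-off compounds annually, which mirrors the worked example below; analysts sometimes chain the reductions multiplicatively instead, and the function name and structure here are mine, not the guidance's.

```python
def apply_filters(observed_monthly_change, deadweight, displacement,
                  attribution, dropoff, years):
    """Illustrative only: adjust an observed monthly change using the four
    SROI filters, applying the first three additively and compounding
    drop-off year by year."""
    # Share of the observed change still claimed after deadweight,
    # displacement and attribution are netted off.
    claimed_share = 1 - (deadweight + displacement + attribution)
    monthly_impact = observed_monthly_change * claimed_share

    # Annual totals, shrinking by the drop-off rate each subsequent year.
    return [monthly_impact * 12 * (1 - dropoff) ** year for year in range(years)]
```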
For example
Say a public health campaign aims to encourage people to quit smoking. One of our key outcome measures is the increase in the number of people who initiate smoking cessation by contacting the quit line. After implementation, new contacts increase by 2,000 in the first month. Let’s apply the filters:
Deadweight: We know there’s a seasonal pattern of calls to the quit line and we estimate, from the patterns seen in the last few years, that 25% of the increase may reflect this normal seasonal variation rather than our program.
Displacement: Some of the people who called the quit line may have already been planning to talk to their family doctor about giving up smoking. By making them aware of the quit line we weren’t adding them to the quitters’ club, just moving them to a different point of entry. We don’t know how many people fall into this category but in the interests of not overclaiming, let’s assume it’s 10%.
Attribution: At the same time as our campaign, there was also a news item that did the rounds on social media, emphasising how quitting smoking reduces risks of heart disease and lung cancer. Some of the extra calls to the quit line may have been catalysed by this article and not our campaign. Let’s be conservative and take another 15% off for this.
Drop-off: Our campaign only has one year’s funding but we are hoping its messages will be remembered and have some effect over the longer term. Based on literature on similar campaigns with similar populations, we assume the effect of the campaign will drop off by 20% each year.
The first three filters collectively reduce our effects by 50%, to 1,000 new quitters per month. The fourth, drop-off, is slightly more complicated and progressively reduces this number over time. Together, the four filters imply that over the next five years, our campaign will cause approximately 40,000 extra people to initiate smoking cessation.
We acknowledge it’s uncertain, so we also do some scenario analysis to explore the social value of the campaign under a range of higher and lower assumptions, getting a range of 35,000-45,000 new quitters.
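Here is the same arithmetic as a quick, self-contained sketch. The central figures are the ones above; the “lower” and “higher” scenario values are hypothetical placeholders for illustration, not the assumptions behind the 35,000-45,000 range.

```python
# Central assumptions from the example above.
monthly_increase = 2000
deadweight, displacement, attribution, dropoff = 0.25, 0.10, 0.15, 0.20

claimed_share = 1 - (deadweight + displacement + attribution)   # 0.50
monthly_impact = monthly_increase * claimed_share               # 1,000 per month

five_year_total = sum(monthly_impact * 12 * (1 - dropoff) ** year
                      for year in range(5))
print(round(five_year_total))        # ~40,339 -> "approximately 40,000"

# Naive extrapolation with no filters at all (see the next section).
print(monthly_increase * 12 * 5)     # 120,000

# A toy scenario sweep: these low/high filter values are placeholders,
# not the assumptions used for the post's 35,000-45,000 range.
for label, dw, disp, attr in [("lower", 0.30, 0.12, 0.18),
                              ("central", 0.25, 0.10, 0.15),
                              ("higher", 0.20, 0.08, 0.12)]:
    monthly = monthly_increase * (1 - (dw + disp + attr))
    print(label, round(sum(monthly * 12 * (1 - dropoff) ** y for y in range(5))))
```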
See what we did there?
If we’d simply extrapolated the first month’s result for the next five years we would have estimated an extra 120,000 calls to the quit line. But that would have been overclaiming. We have made an effort to avoid overclaiming by cutting the number down to something more believable, around one-third of the big number. Great job, team! High five! 🖐️
But wait a minute
How do we know the campaign was responsible for any of the uptick in calls? Exercising caution about magnitude doesn’t address whether there’s a reasonable claim that a causal relationship exists at all.
Causal inference requires defining the counterfactual scenario (what would have happened without the intervention) and systematically ruling out alternative explanations.
In fairness to SROI, the guidance does mention the possibility of using between-group comparisons to address deadweight (along with other possibilities like trend analysis, benchmarks, literature, or expert opinion). However, directly addressing causality isn’t a requirement; it’s a choice the analyst might make when applying the four filters. A recent article about SROI observed that the attribution of benefits was highly variable in its reliability and often failed to meet established standards for inferring causality.
To be clear, I don’t mean to imply that experimental designs are the only way; causal relationships can be inferred to an acceptable degree of robustness through a range or combination of strategies, such as:
A theory of change or realist logic to propose for whom, why and how the intervention should work;
Well-designed time series or other observational studies to test logical links between actions and impacts (a minimal sketch follows this list);
Contribution analysis to systematically assess the plausibility of an intervention’s contribution to observed outcomes, through iterative testing of a theory of change and triangulation of evidence;
Checklists like Jane Davidson’s and Sir Austin Bradford Hill’s, and rubrics such as Aston & Apgar’s to interrogate evidence from multiple perspectives;
Stakeholder knowledge (practitioners, participants, subject matter experts) - using structured processes such as qualitative impact protocol to reduce potential bias; and
Literature review (which may inform any of the above).
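As an illustration of the time-series strategy above, a segmented regression (interrupted time series) on monthly quit-line contacts can test whether there was a level shift at the campaign launch over and above trend and seasonality. The sketch below runs on simulated data; every number and variable name is hypothetical, and a real analysis would also need to handle autocorrelation and other threats to validity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly quit-line contacts: 36 months pre-campaign, 12 post.
# All figures are hypothetical and for illustration only.
rng = np.random.default_rng(42)
n_months = 48
t = np.arange(n_months)
month_of_year = (t % 12) + 1
seasonal = 600 * np.sin(2 * np.pi * (month_of_year - 1) / 12)  # seasonal swing
campaign = (t >= 36).astype(int)                               # post-launch indicator
calls = 4000 + seasonal + 1000 * campaign + rng.normal(0, 150, n_months)

df = pd.DataFrame({"calls": calls, "t": t, "month": month_of_year,
                   "campaign": campaign})

# Segmented regression: level shift at campaign launch, controlling for an
# underlying trend and month-of-year seasonality.
model = smf.ols("calls ~ t + campaign + C(month)", data=df).fit()
print(model.params["campaign"])           # estimated level shift in calls
print(model.conf_int().loc["campaign"])   # 95% confidence interval
```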
How you do it is a contextual decision.3 There are no gold standards (except critical reasoning, as Michael Scriven argued, and of course transparency).
Determining whether a cause-and-effect relationship exists is challenging, and it is done to varying degrees of imperfection by me, by you, and across evaluation and research generally. I think trimming the outcomes while sidestepping the causal question is problematic, but the reliability of causal attribution isn’t just an SROI problem.
Bottom line: first establish if there even is a causal claim
Before concerning ourselves with making conservative claims, the first step must be to determine whether any causal claim is justified at all. Only after establishing a causal link does it make sense to discuss how modest to be about the impact. “Avoiding overclaiming” is about caution; causal inference is about validity. A good SROI needs both. Conflating the two undermines the credibility of the analysis.
The Social Value International webpage on Principle 5 (Do Not Overclaim) notes that the standard on applying this principle is due to be reviewed. I hope that the next edition will provide clearer guidance on addressing causality, and that the extra clarity will flow through to future SROI analyses.
Thanks for reading!
I’d like to acknowledge Dr Jay Whitehead for kindly agreeing to peer review this post. Errors and omissions are my responsibility. This post represents my opinions and not those of any organisations I work with.
Also see
The Guide to SROI - Social Value International.
Supplementary Guidance for estimating Deadweight and Attribution - Social Value International.
Fujiwara, D. (2015). The Seven Principle Problems of SROI. Simetrica.
Previous posts on SROI
1. Social Return on Investment (SROI) is a widely used framework for measuring the social, economic, and environmental value generated by organisations, especially in the nonprofit sector. Unlike financial ROI, which focuses solely on spending and income from a business venture, SROI seeks to capture the broader impact of programs and initiatives by assigning monetary values to outcomes that matter to stakeholders. The result is typically expressed as a ratio, showing how much social value is created for every dollar invested. While SROI offers a systematic and (in principle) transparent approach to demonstrating value, it also demands careful data collection and engagement with stakeholders to ensure the value being measured reflects real-world changes that people care about.
2. Look, I didn’t pick these terms. I’m just as confused as you are by the use of “attribution” here. All four filters should be about attribution, though the guidance makes no requirement to address it directly.
3. For example, SROIs can be done ex ante (estimating potential future value) or ex post (assessing actual results). Sometimes, a hybrid approach is used, combining results-to-date with projections of longer-term value. Ex-ante estimates often draw on literature - ideally studies on the same intervention, population and country, though contextual differences are common. Ex-post estimates should prioritise actual program data and credible causal inference strategies. Some assumptions, such as counterfactuals and valuation factors, may still be informed by literature - and it’s important to document and justify these choices transparently.