Contextually-responsive cost-benefit analysis
What would it look like to conduct CBAs that are inclusive, responsive, contextually viable and meaningful for people whose lives are affected?
Imagine we’re tasked with conducting a cost-benefit analysis (CBA) on a proposed initiative, to help our government determine whether it’s worthwhile to invest in. This kind of future-facing analysis is called ex-ante CBA. The initiative would introduce new healthy eating guidelines, based on the principle of consuming real or minimally processed foods and avoiding ultra-processed foods.

CBA, as we know, is an economic method of evaluation. It synthesises one criterion, one standard, and principally one kind of evidence, to help evaluators judge whether an initiative creates more value than it consumes. Here’s what that looks like in our ex-ante CBA of the proposed healthy eating guidelines:
For evidence, we use monetary valuations of costs and benefits. We take a whole-of-society perspective, estimating future costs and benefits over a time horizon of 20 years. Costs include producing and promoting the guidelines, and the change in resource costs for producers supplying more real foods and fewer ultra-processed foods to the population. Benefits include improvements in population health, reduced demand for health care, and associated improvements in national productivity.
Our criterion, baked into the theory and structure of CBA, is Kaldor-Hicks efficiency - that is, we are assessing whether the net effect of the various costs and benefits across our society will be positive or negative overall.
To set a standard, we select a discount rate. In effect, the higher the discount rate, the higher the hurdle for benefits to exceed costs. The discount rate varies depending on our location and the purpose of the CBA. In Aotearoa NZ, for example, The Treasury currently recommends a discount rate of 5% for appraising publicly funded interventions. The UK Government currently uses a discount rate of 3.5%.
Synthesis is performed using a spreadsheet to calculate the Net Present Value (NPV) - the total value of a discounted time series of benefits minus the total value of a discounted time series of costs.
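The synthesis step can be sketched in a few lines of code. This is a minimal illustration with invented annual cost and benefit streams (the figures are hypothetical, chosen only to demonstrate the mechanics), using the 5% discount rate mentioned above:

```python
def npv(benefits, costs, rate):
    """Net Present Value: discounted benefits minus discounted costs.

    benefits, costs: lists of annual values, where index 0 is year 0.
    rate: annual discount rate (e.g. 0.05 for 5%).
    """
    discounted_benefits = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))
    discounted_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    return discounted_benefits - discounted_costs

# Hypothetical streams over a 20-year horizon ($ millions): up-front costs of
# producing and promoting the guidelines, then ongoing supply-side costs;
# health and productivity benefits growing over time.
costs = [10] + [2] * 20
benefits = [0] + [3 + 0.5 * t for t in range(1, 21)]

result = npv(benefits, costs, rate=0.05)
print(f"NPV at 5% discount rate: {result:.1f}")
```

Note how the discount rate acts as the hurdle: raising `rate` shrinks the present value of benefits that arrive in later years faster than it shrinks the mostly up-front costs.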
Our analysis finds that the NPV is greater than zero, indicating that the proposed initiative is expected to create benefits that exceed costs (technically, we interpret the NPV as an indication that the initiative is better than alternatives, since the discount rate represents the value of the hypothetical universe of next-best things we could do with the resources).
The NPV, of course, is just a number. As always, it’s on us, the evaluators, to make a judgement. So we do that, and we conclude that the initiative is worthwhile.
CBA is a rigorous and disciplined approach to making people’s values explicit
Value is an attribute of people, not impacts. We may be able to observe impacts but to understand value, we need to see into the minds of people. There are many ways of understanding value. CBA is one. To illustrate, let’s consider how we might estimate the monetary value of benefits like improvements in population health.
CBA seeks to estimate stakeholders’ values by way of compensating variations, defined as the monetary value that would need to change hands to leave a person equally well-off under the proposed change as they would be in the status quo, based on their preferences.
To get a handle on these compensating variations, we have many options. Sometimes, the benefits in question are things that are actually bought and sold in real markets. If we assume these markets are reasonably well-functioning (e.g., buyers and sellers are well informed), then we can use market prices or wages as proxies for compensating variations.
There isn’t an obvious market for improvements in population health but if we think creatively, we can perhaps infer something about the value people place on a human life. For example, some researchers have compared market wages of high-risk and low-risk occupations requiring similar skill levels, to estimate the premium people expect to earn for risking their lives. This is just one way of working out the value of a statistical life year.
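To make the wage-premium logic concrete, here is a back-of-the-envelope sketch. All figures are invented for illustration; real hedonic-wage studies control statistically for many other determinants of wages:

```python
# Hypothetical hedonic-wage calculation (all figures invented for illustration).
high_risk_wage = 62_000           # annual wage in the riskier occupation ($)
low_risk_wage = 60_000            # annual wage in a comparable safer occupation ($)
high_risk_fatality = 4 / 10_000   # annual fatality rate, riskier occupation
low_risk_fatality = 2 / 10_000    # annual fatality rate, safer occupation

wage_premium = high_risk_wage - low_risk_wage               # $2,000 per year
risk_differential = high_risk_fatality - low_risk_fatality  # 2 in 10,000

# Implied value of a statistical life: the pay workers demand per unit of risk.
vsl = wage_premium / risk_differential
print(f"Implied value of a statistical life: ${vsl:,.0f}")
# prints: Implied value of a statistical life: $10,000,000
```

The intuition: if workers collectively accept an extra 2-in-10,000 annual fatality risk in exchange for $2,000 each, then 10,000 workers accept two expected deaths for a total of $20 million, or $10 million per statistical life.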
Another option is to conduct a survey and simply ask people to state what they would be willing to pay for improved health. The problem is, there's often quite a gap between what people say they are hypothetically willing to pay and what they actually turn out to be willing to pay in a real situation.
In lieu of a real market for buying and selling improvements in population health, another option would be to construct a pretend market, a sort of valuing game, in which people would reveal, through a set of trade-offs, what they would be willing to pay for living longer and in better health if such a market did exist. An alternative proxy could be based on people’s willingness to pay for improvements in life satisfaction.
There are more options too, each with their own strengths and limitations. These various options can sometimes yield very different estimates from each other. We won’t let this deter us; it’s just a fact of life when we’re valuing intangibles, and it’s another reason why evaluative thinking is so important in our work.
What might we not notice, if we only did a CBA?
CBA, like all methods, has strengths and limitations. For example, as I mentioned above, our CBA evaluates the proposed healthy eating guidelines against one criterion: whether they would make society better off overall (irrespective of how they may affect different people or groups within society).
Other perspectives may be just as important. Here are three examples.
1. Different people value things differently
CBA is a tool for aggregating values. Sometimes, we need to unpack value from different perspectives.
For example, say the healthy eating guidelines include recipe suggestions that reflect values and preferences of Western cultures and are incompatible with the cultural traditions of some minority groups. These conflicting values would be invisible in the CBA, but they’re likely to contribute to disparities in health outcomes between different groups. The CBA might still ascertain that the guidelines confer a net social benefit, but we might not notice that the benefit is unevenly distributed and below its full potential.
To mitigate this risk we can supplement CBA with other methods, such as surveys and focus groups, to test the accessibility and acceptability of the guidelines to different groups of people. With this mix of methods, we can balance NPV with cultural acceptability.
2. Costs and benefits affect people differently
People in low socioeconomic areas often live in “food deserts” where ultra-processed food is more accessible and affordable than real food. Introducing the healthy eating guidelines could make society better off overall while at the same time increasing health disparities between socioeconomic strata. CBA could therefore justify approval of the proposed initiative despite a negative consequence from an equity perspective: its inability to improve nutrition in low-income neighbourhoods.
To mitigate these concerns, the initiative could implement complementary strategies to address structural factors that give rise to food deserts. The anticipated costs and benefits of these strategies can be incorporated in the CBA. In order to determine the acceptability of these complementary strategies to consumers and food producers, we will need to supplement the CBA with consultative methods. With this mix of methods, we can balance NPV with equity of impacts.
3. Different groups might have values that conflict with each other
We anticipate that the food and pharmaceutical industries, both of which profit from widespread consumption of ultra-processed foods, might oppose attempts to improve population nutrition. We could adapt the CBA to examine costs and benefits of healthy eating guidelines from the perspectives of consumers and producers respectively, and perhaps find that consumers stand to benefit while industries face a reduction in profits.
To the extent that the values of industry lobby groups conflict with the health and wellbeing interests of the population, we cannot make a valid evaluative judgement from a whole-of-society NPV alone. However, by estimating separate NPVs from the perspectives of the two groups, our analysis could contribute important information to a wider process of deliberation by clarifying tensions between different sets of interests. This mix of methods can help decision-makers to balance the ethics and politics of population health and commercial interests.
Contextually-responsive CBA requires stakeholder engagement
CBA is often conducted on a desktop basis, without engaging stakeholders to understand what they value, or to seek their input into what methods are appropriate and acceptable in the contexts where the evaluation will have consequences.
You can conduct CBAs that are inclusive, responsive, contextually viable and meaningful for stakeholders. There’s just nothing in the CBA methodology that says you should.
Program Evaluation Standards, however, are clear that evaluations should be done with (or by) stakeholders, not to them or for them or without them. For example, one set of standards, from the US, includes:
Negotiated Purposes: Evaluation purposes should be negotiated based on stakeholder needs, considering different perspectives
Meaningful Processes and Products: Evaluations should “encourage participants to rediscover, reinterpret, or revise their understandings and behaviours”
Concern for Consequences and Influence: Evaluations should “promote responsible and adaptive use” and guard against “unintended negative consequences and misuse”
Contextual Viability: Evaluations should recognise and balance the cultural and political interests of stakeholders, acknowledging potential power imbalances and responding to diverse needs in a balanced way
Responsive and Inclusive Orientation: Evaluations should be responsive to stakeholders, including them in a systematic and transparent manner, building meaningful relationships and valuing diverse views and interests.
Another set of program evaluation standards, from Aotearoa New Zealand, emphasises related principles such as respectful, meaningful relationships; responsive methodologies; and trustworthy results.
These standards are clear that evaluations should not be designed and decided solely by those who pay for them and those who conduct them for a living. The real experts in a policy or program include those whose lives are affected by it. They have a right to a voice throughout the evaluation - understanding the proposed intervention, determining the basis upon which evaluative judgements should be made, determining what evidence is needed and will be credible, how the evidence should be gathered and analysed, what the evidence shows, what the evidence means, whether the mix of evidence and explicit values suggests the healthy eating guidelines are worthwhile to invest in, and what adaptations might make the guidelines more fit-for-purpose and worthwhile.
As with all evaluation methods, our use of CBA should be negotiated and not preordained. We must not assume “that a technically excellent evaluation is sufficient for positive use and effective influence”. We need to stay open to contradictory views and interests, avoiding “favouring a specific evaluation method or approach without proper regard for the needs of the actual stakeholders in the current setting and the purposes of the evaluation”. We must attend to context, culture, and “the political vibrancy and inherent value of stakeholder positions and value judgements”. If we are to provide appropriate mechanisms for meaningful stakeholder input, we need the flexibility to determine an appropriate method or mix of methods taking that input into account.
These standards would be difficult to meet using CBA alone - but entirely possible to meet with a participatory, multi-method approach.
A scenario for contextually-responsive CBA
In the scenario above, I opened with: “Imagine we’re tasked with conducting a cost-benefit analysis…”. Let’s back up the truck. Who decided we should conduct a CBA? How do we know it’s going to meet the needs of stakeholders and produce a meaningful, viable, responsive evaluation? Here’s an alternative scenario.
Before deciding what methods we should use to evaluate the proposed healthy eating guidelines, we establish a citizen panel. We explain to the panel the objectives of the study, seek their input into scoping evaluation questions, criteria and standards, and discuss methodological options. This includes considering the potential contribution of a CBA, bearing in mind how CBA works, what it could tell us that we can’t get from other methods, and what it can't tell us. We also discuss options for conducting a wider analysis not limited to economic methods and metrics alone.
One of the possible outcomes of this consultation is that citizens might not support the use of CBA. In this instance, however, the value of CBA to the evaluation is clear and stakeholders support its use in conjunction with other methods. The consultative process has enhanced the evaluation design and strengthened stakeholder support.
With the feedback from the citizen panel, we return to the commissioners of the evaluation and successfully advocate for a mixed-methods approach which combines CBA with political economy analysis and citizen engagement through interviews, surveys and focus groups. The evaluation design will use rubrics to guide transparent balancing of NPV with other criteria such as equity of impacts, cultural acceptability, and commercial interests.
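One way to make the rubric idea concrete is a simple non-compensatory rule: the initiative is judged worthwhile only if every criterion meets a minimum standard, so a strong NPV cannot buy off a “poor” rating elsewhere. This is a toy sketch; the criteria, rating scale, and threshold are invented, and in practice they would be negotiated with stakeholders rather than hard-coded:

```python
# Toy sketch of rubric-based synthesis (criteria, ratings, and the threshold
# are all invented for illustration).
RATING_SCALE = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

# Hypothetical ratings agreed through deliberation over the mixed-methods evidence.
ratings = {
    "net social benefit (NPV)": "good",
    "equity of impacts": "adequate",
    "cultural acceptability": "good",
    "handling of commercial interests": "adequate",
}

def overall_judgement(ratings, must_reach="adequate"):
    """Worthwhile only if every criterion meets the minimum standard
    (non-compensatory: a high NPV cannot offset a 'poor' rating)."""
    minimum = RATING_SCALE[must_reach]
    failing = [c for c, r in ratings.items() if RATING_SCALE[r] < minimum]
    verdict = "worthwhile" if not failing else "not yet worthwhile"
    return verdict, failing

verdict, failing = overall_judgement(ratings)
print(verdict, failing)
```

The point of the code is not the arithmetic but the transparency: the criteria, the ratings, and the synthesis rule are all explicit and open to challenge, which is exactly what a desktop NPV on its own does not provide.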
By engaging stakeholders, we have expanded commissioners’ understanding of why value for money and CBA are not synonymous terms, and why CBA, though valid and helpful, is insufficient in this case to provide complete answers to their evaluative questions. Our evaluation design now incorporates multiple forms of evidence and ways of creating knowledge, which should enable better-informed and more nuanced evaluative judgements.
We engage again with the citizen panel when reviewing preliminary findings of our study and considering implications for policy making. We present back the results from our mixed-methods evaluation and facilitate a discussion to validate, contextualise, and/or challenge what we think we’re seeing in the findings. We invite the commissioners to this forum too, anticipating that this process may catalyse new understandings for all participants. The process also throws new light on the evaluators’ understanding of the evidence, prompting some additional data collection, analysis, and reinterpretation of findings.
In the end, the evaluation meets decision-makers’ requirements for an overall assessment of the net social benefit of implementing healthy eating guidelines - and in fact it meets this brief better than it would have done if we had used CBA alone, because our approach identified unanticipated issues that, if addressed, will be more likely to improve the acceptability and adoption of the guidelines, ultimately leading to greater and more equitable impacts.
Conclusion
Many evaluators have for a long time accepted the principle that evaluations should be open to scrutiny, to check and ensure their quality. There is no universal checklist for this purpose, reflecting the fact that there’s no universal position on the matter of ethics in evaluation. However, many organisations have developed quality frameworks or principles defining what a good quality evaluation should look like. Each of these frameworks is the culmination of debate and serves to formalise some degree of consensus about evaluation as a field of practice.
The Program Evaluation Standards (PES) of the Joint Committee on Standards for Educational Evaluation are a leading example. The PES comprise 30 standards, organised under five headings: Utility, Feasibility, Propriety, Accuracy, and Accountability, together with guidance for people involved in planning, implementing, or using program evaluations. The PES are commonly cited in program evaluation, share many principles in common with other evaluation standards, and have influenced other evaluation standards internationally.
The PES argue that evaluation should take an explicit interest in its effects on people’s lives. Therefore, in addition to valid evaluative reasoning and careful selection of methods, evaluation requires attention to stakeholders, concern for consequences and influence, responsive and inclusive orientation, the protection of human rights and dignity, and a range of related considerations.
Another example, New Zealand’s evaluation standards, are conceptualised around an overarching principle of “evaluation with integrity”. They argue that evaluators have an ethical commitment to contribute to the wellbeing of society and, accordingly, evaluation practices, processes and products should ensure trust and confidence.

CBA can adhere to the standards, but only if some conditions are met. If CBA is used in combination with other methods, it is possible to work in an inclusive and responsive way, with the full range of stakeholder values, and to conduct evaluations that are contextually viable and meaningful for stakeholders. Sometimes, adhering to the standards might involve deciding not to use CBA.
If CBA is chosen in advance as the sole evaluation method, it may fall short of standards for explicit values, negotiated purposes, meaningful processes and products, concern for consequences and influence, contextual viability, and responsive and inclusive orientation.
Therefore, we should regard CBA as one tool in an evaluator's toolbox, to be used in contextually responsive ways, and in combination with other methods - not a complete, preordained, or unconditionally preferred method of evaluation.
Key references
King, J. (2023). How should Program Evaluation Standards inform the use of cost-benefit analysis in evaluation? Journal of MultiDisciplinary Evaluation. Vol. 19, No. 43.
ANZEA & Superu. (2015). Evaluation standards for Aotearoa New Zealand. Wellington, NZ: Aotearoa New Zealand Evaluation Association and Social Policy Evaluation and Research Unit.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Sage.
This post builds on last week’s: Cost-benefit analysis through an evaluative lens.

