The Society for Benefit-Cost Analysis[1] recently shared a blog post titled On Balance: Will U.S. Regulatory Benefit-Cost Analysis Survive? by Lisa A. Robinson, Harvard T.H. Chan School of Public Health.
I was particularly taken with the author’s description of CBA as a “voyage of discovery” - a view I strongly share:
Textbooks describe the goal of benefit-cost analysis as assessing economic efficiency, i.e., estimating the net benefits of policies to determine how to best allocate resources so as to maximize social welfare. In practice, benefit-cost analysis can be better described as a voyage of discovery, promoting systematic exploration of the evidence on policy impacts. We generally lack the data, time, and resources necessary to quantify and value all important impacts or to investigate all feasible policy options, limiting the extent to which the analysis is able to identify the most cost-beneficial approach. Yet if well-conducted, benefit-cost analysis uncovers impacts that might be otherwise unanticipated, synthesizing evidence from multiple sources and investigating the implications of uncertainty. It identifies the direct costs likely to be borne by organizations or individuals tasked with implementing the policy, as well as the types and magnitudes of the resulting benefits. This information in turn is useful in determining who might support or oppose a policy, as well as whether and how a policy could be redesigned to have more desirable effects. What is most important is not the end result, rather it is what we learn along the way.
To me this is a great description of the value of systematically analysing benefits and costs, and why I recommend including sensitivity analysis, scenario analysis, break-even analysis, distributional analysis, and mixing CBA with other methods.
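To make a couple of those techniques concrete, here is a minimal sketch in Python, using entirely hypothetical parameters and values of my own invention (not drawn from any real regulation). It shows sensitivity analysis varying an uncertain parameter, and break-even analysis finding the value at which net benefits flip sign:

```python
# A minimal sketch of sensitivity and break-even analysis on a simple
# net-benefit model. All numbers are hypothetical, for illustration only.

def net_benefit(benefit_per_firm: float, n_firms: int, total_cost: float) -> float:
    """Net benefit = total benefits minus total costs."""
    return benefit_per_firm * n_firms - total_cost

TOTAL_COST = 40_000_000  # hypothetical total compliance cost ($)
N_FIRMS = 10_000         # hypothetical number of affected firms

# Sensitivity analysis: vary the most uncertain parameter and see
# whether the conclusion (positive vs negative net benefit) changes.
for benefit_per_firm in (2_000, 4_000, 6_000):
    nb = net_benefit(benefit_per_firm, N_FIRMS, TOTAL_COST)
    print(f"benefit/firm = ${benefit_per_firm:,}: net benefit = ${nb:,.0f}")

# Break-even analysis: the parameter value at which net benefit is zero.
break_even = TOTAL_COST / N_FIRMS
print(f"break-even benefit per firm: ${break_even:,.0f}")
```

The point is not the arithmetic, which is trivial, but the discipline: making explicit which assumptions the conclusion hinges on.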
CBA in U.S. regulatory context
In the Netflix series Zero Day, former president George Mullen (Robert De Niro) says: “Freedom is what allows people like you to do whatever you want. Liberty is what protects the rest of us from people like you”. This line, delivered during a tense confrontation with a tech billionaire, highlights a difficult balance between individual autonomy and collective security.
Regulations exist to manage that balance by ensuring fair play and safeguarding public wellbeing, but they also impose compliance costs on businesses and individuals. Judging whether proposed and existing regulations do more good than harm is essential, especially since the cumulative effect of multiple regulations (even good ones) can inadvertently stifle economic activity and erode social wellbeing. Overlapping environmental and financial mandates, for example, may create administrative burdens that discourage innovation as a side-effect. Striking the right balance between necessary oversight and economic vitality is a perennial and deeply contested challenge for policymakers, with both economic and political dimensions.
CBA has been a cornerstone of U.S. regulatory policy for over 40 years, introduced by the Reagan Administration to weigh the benefits of proposed regulations against their costs, with the intent of ensuring that new regulations maximise social welfare. Executive orders have at different times expanded and restricted the scope of CBA, and cuts to agency staffing and budgets have affected the quality and frequency of analyses. Recent developments, especially under administrations focused on deregulation, raise concerns that regulatory analysis may increasingly emphasise economic costs while neglecting broader societal benefits, potentially undermining CBA’s original purpose of maximising social welfare. If cost considerations dominate the analysis, CBAs could be biased against regulation. The future of CBA in U.S. regulation is uncertain: whether it continues to provide robust analyses for balanced decision-making depends on political will and funding.[2]
Not all regulations are subjected to CBA. While the CBA process is designed to promote systematic, evidence-based decision-making, in practice, agencies often lack the data, time, and resources to fully quantify all impacts or explore every policy alternative. Historically, only about half of major regulations have included estimates of both costs and benefits, with analysis often hampered by limited resources, analytic challenges, and shifting political priorities.
But there’s more to it than just CBA…!
Quite. CBA is a valuable tool. Robust CBAs can inform sound regulation. As Sunstein (2018) wrote:
Whether or not an analysis of costs and benefits tells us everything we need to know, at least it tells us a great deal that we need to know.
However, as this quote also suggests, CBA doesn’t tell us everything. The wisdom we gain from CBA (when it is feasible) should be combined with broader considerations: CBA should be just one input among many. As I’ve argued many a time, what we need is an interdisciplinary approach, combining insights from economics and evaluation. It should engage stakeholders to understand what matters to them, consider as many criteria and sources of evidence as are necessary, and synthesise them through an explicit evaluative reasoning process to reach a well-formed and well-informed decision. These are foundational principles of the Value for Investment system.
And indeed that is how it happens.
Regulatory review, as overseen by the Office of Information and Regulatory Affairs (OIRA), is a multifaceted process. While CBA is central to systematically evaluating the economic, social, and environmental impacts of regulations, OIRA’s review also checks consistency with statutory requirements and presidential priorities. The process incorporates a range of analytical methods besides CBA, including cost-effectiveness analysis,[3] distributional analysis to assess equity impacts, and robust stakeholder consultation to gather diverse perspectives and identify potential unintended consequences. Multi-criteria decision analysis (MCDA), a technocratic synthesis framework, is used to integrate these multiple inputs, enabling transparent decision-making when weighing complex trade-offs.[4]
Bottom line:
CBA can enhance mixed methods, mixed methods can enhance CBA, and explicit evaluative reasoning provides the means to bring it all together. These tenets are as applicable to U.S. regulatory analysis as any other policy context. There is robust analytical machinery in place. Now it’s up to politics to use it wisely.
Dig in.
Robinson, L.A. (2025). On Balance: Will U.S. Regulatory Benefit-Cost Analysis Survive? Blog post. Society for Benefit-Cost Analysis (Apr 30).
In other news
When new reports become publicly available that illustrate VfI principles in action, I add them to my resources page.
Last week at the UK Evaluation Society Conference in Glasgow, I attended an excellent session in which the Department for Science, Innovation and Technology (DSIT) and an Itad consortium presented their VfM assessment of the £1.5 billion Global Challenges Research Fund, sharing their reflections on the principles, benefits and challenges of using rubrics and mixed methods to evaluate VfM. Here’s their report:
Barnett, C., Vogel, I., Hepworth, C., Guthrie, S., Coringrato, E., Puri, I., Wade, I., Rodriguez Rincon, D. (2025). Value for Money Assessment: Global Challenges Research Fund. Research Paper Number DSIT 2025/010. Department for Science, Innovation & Technology, London.
Meanwhile, the United Nations Sustainable Development Group's System-Wide Evaluation Office has launched its new website - including its VfM assessment of the Spotlight Initiative, a flagship programme of the Secretary-General to end all forms of violence against women and girls:
Carou Jones, V., Tywuschik-Sohlstrom, V., Chua, N. (2024). Value for Money Assessment of the Spotlight Initiative. October 2024. SWEO/2024/001. United Nations Sustainable Development Group System-Wide Evaluation Office, New York.
And Te Whatu Ora (Health New Zealand) has recently released Dovetail Consulting's report on impaired driving rehabilitation programmes:
Moss, M., Butler, R., Garden, E., Parslow, G., Porima, L., Schiff, A., Spee, K., Field, A., Gregg, O., King, J., McKegg, K. (2024). Impaired Driving Rehabilitation Programmes – Evaluation Report. Report for Health New Zealand Te Whatu Ora. Dovetail Consulting Limited, Auckland.
Congratulations to all three teams on completing these significant evaluations.
Thanks for reading!
[1] There is no substantive difference between "benefit-cost analysis" and "cost-benefit analysis." Both terms refer to the same systematic process of identifying, quantifying, and comparing the expected or actual benefits and costs of a decision, project, or investment to determine its overall value or feasibility. The terms are used interchangeably in economics, business, and public policy. I will stick with my abbreviation of CBA.
[2] The Trump administration jumped straight in to reform regulatory impact analysis, prioritising deregulation speed, centralised oversight, and cost reduction over comprehensive impact assessment. Biden-era guidance was scrapped in favour of older, more restrictive guidance, dialling back emphasis on equity and distributional effects while favouring higher discount rates, which in effect raises the bar for justifying new regulations. A new Executive Order also upped the ante by mandating the repeal of ten regulations for every new one, a big escalation from the two-for-one policy under Trump’s first Presidency, and imposed tougher caps on regulatory costs. For the first time, independent regulatory agencies, such as the SEC, FTC, and FCC, are now required to submit proposed and final regulations to the White House for CBA and centralised review. And by lowering the economic impact threshold for OIRA review, the administration has ensured that a far greater number of regulations will be subjected to centralised review. Overall, I expect the net effect of these reforms will be to favour economic growth and business interests, at the expense of some social protections and equity considerations.
[3] Cost-effectiveness analysis (CEA) is different from CBA. In particular, it doesn’t monetise benefits. Instead, CEA calculates the ratio of costs (measured in money) to an outcome (measured in natural or physical units, such as life-years gained or tonnes of carbon dioxide emissions averted). For the ratio to be meaningfully interpreted, it needs to be compared with the costs and outcomes of the next-best alternative intervention.
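As a worked illustration with made-up figures, that comparison is usually expressed as an incremental ratio: the extra cost per extra unit of outcome relative to the next-best alternative.

```python
# A minimal sketch (hypothetical figures only) of comparing two
# interventions on a cost-effectiveness basis.

# (cost in $, outcome in life-years gained) for each intervention
current_practice = (1_000_000, 200)
new_programme = (1_600_000, 350)

extra_cost = new_programme[0] - current_practice[0]      # $600,000
extra_outcome = new_programme[1] - current_practice[1]   # 150 life-years

# Incremental cost-effectiveness ratio: $ per additional life-year gained
icer = extra_cost / extra_outcome
print(f"ICER: ${icer:,.0f} per life-year gained")  # $4,000 per life-year
```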
[4] While multi-criteria decision analysis (MCDA) is a widely used tool for structuring complex decisions, I think rubrics are usually superior. Good MCDA requires quantifiable criteria and empirically derived weights. If you’re dealing with qualitative evidence or intangible value, rubrics will free you from pulling numerical weights out of the air. Rubrics provide transparent criteria and rating systems that are intuitive to co-design, interpret and communicate. They help ensure validity and credibility in evaluation, facilitating clear findings and supporting evaluation use. Unlike MCDA, which can sometimes obscure subjective judgements behind weighted scores, rubrics lay out criteria and standards explicitly, making deliberation and decision processes challengeable and defensible.
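To illustrate the point about weights, here is the simplest weighted-sum form of MCDA, with hypothetical options, criteria, scores, and weights of my own invention:

```python
# A minimal sketch of weighted-sum MCDA (the simplest common variant).
# All options, criteria, scores, and weights are hypothetical.

options = {
    "Option A": {"effectiveness": 8, "equity": 3, "cost": 6},
    "Option B": {"effectiveness": 5, "equity": 9, "cost": 7},
}

# In good MCDA these weights are empirically derived; too often they
# are pulled out of the air -- and the ranking hinges on them.
weights = {"effectiveness": 0.5, "equity": 0.3, "cost": 0.2}

for name, scores in options.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score = {total:.1f}")

# With these weights Option B wins (6.6 vs 6.1); shift weight from
# equity to effectiveness (e.g. 0.6/0.2/0.2) and Option A wins instead.
# The subjective judgement hides inside the weights.
```

A rubric would surface that same judgement as explicit criteria and standards, open to deliberation rather than buried in a number.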