4 Comments
PeppermintWater:

The reductionist approach to health research has never sat well with me when it comes to the reality of patients' lives. I recognise RCTs are important for drug trials etc., but I shudder when I hear 'evidence-based practice' in a clinical setting, when the probability is that it has failed to acknowledge all of the issues raised here. Additionally, the history of medical research is such that many of the studies that have informed current practices were conducted primarily on male bodies... so the very foundations of so-called 'evidence' are evidence for an average male (usually a white US college student). It is well overdue that diversity AND complexity are better embraced in evaluation and knowledge generation.

Bub-sur-mer:

Excellent article, Julian. One of the main reasons I subscribe to this substack is how you find tools and approaches from other disciplines and unpack the crossovers to evaluation.

Using more systems thinking and tools of pattern recognition will advance both medical research and evaluation. However, I'm disappointed that the authors of the original article misappropriate and make so much use of the term "ecosystem". Unfortunately, we have taken a very important concept describing how nature and environmental issues affect society and turned it into a near-meaningless buzzword. Business, politics, technology, the arts: you name it, they all have an "ecosystem". My eyes glaze over now when I see it used. So, while the original article made some good points, the multiple references to ecosystems lowered its value materially.

My advice, as you continue to build and improve the framework, is to find an evaluation-appropriate way to describe (brand?) the interconnectedness of factors and influences and how we think about them.

Well Fed:

In your experience, are resource allocation and maturity factors in the extent to which complexity is embraced?

For example, a more reductionist RCT study is probably more cost-effective for trialling a new drug with a large sample size than a complex mixed-methods evaluation? While all those contextual questions about the child with asthma are certainly the right questions to be asking, who is actually doing (and paying for) this investigation? Does every patient get this treatment? How are findings and recommendations efficiently fed up the chain to decision makers?

Similarly with maturity: do programs/initiatives/interventions that are more mature attract more $$ than, say, a pilot program with a small budget that can't possibly afford an in-depth evaluation? (It takes time and money to do consultation, focus groups, engagement, and mixed-methods work.)

Julian King:

I agree resource allocation and program maturity are probably important factors. Programs with more resources, whether due to stable, long-term funding or public investment, are better positioned to undertake complex, mixed-methods evaluations that capture contextual nuances and stakeholder perspectives. Resource-limited or early-stage programs/pilots may rely on simpler methods, at lower cost.

The maturity of a program also likely plays a role; mature programs may attract more funding and have established processes, making it feasible to invest in deeper, more complex evaluations. Early-stage initiatives, with limited budgets and uncertain futures, may not justify or afford such investment.

Ultimately, I suspect the interests of the payer also shape these decisions. Public funders are interested in things like equity, context and scalability, supporting more complex evaluations, while private funders may focus on things like safety and efficacy (and profitability), favouring simpler designs. These choices influence not only the evaluation methods used, but also how findings are communicated and acted on by decision-makers.
