VfI memology
A fun look back at some of my attempts to communicate VfI concepts with memes and metaphors
I’m always playing with ways to make ideas sticky, and lately I’ve been reflecting on some of my creative attempts to make evaluative and economic concepts relatable and fun. From goofy memes that’ll make you groan (or possibly chuckle?) to metaphors aimed at capturing the essence of niche topics, I’ve tried various ways to bring Value for Investment (VfI) concepts to life. In this post I’ll share a few examples.
I’ll start with one of my favourites. Using two words, this application of the distracted boyfriend meme conveys one of the key challenges in the politics and practice of value-for-money (VfM) assessment.
On a similar theme, the following cartoon dates back to 2018, when I was still on Twitter. It started as a “make your own” template by Chris Lysy, to which I added the dialogue.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5237d16b-fa03-46b1-801d-ecc31ff6eac2_1760x1312.png)
The Futurama Fry meme came to mind when I was trying to get to the bottom of mixed methods, bricolage, and eclectic methodological pluralism - are they distinct concepts or just different ways of saying “bring your whole toolbox”?
Involving stakeholders is an ethical imperative in rubric development: it surfaces stakeholders’ values, allows people to hear the different perspectives involved, and guards against evaluators’ and commissioners’ biases skewing the evaluation. Don’t just take my word for it; listen to Boromir…
What’s the relationship between evaluation and research? Big, big question, with multiple answers. See this paper by Dana Linnell Wanzer for a nuanced discussion. Here’s one concept I’ve played with, highlighting a dynamic balance in which research (knowledge generation) and evaluation (value determination) are complementary and coexist in a cyclical, interdependent relationship. Evaluation relies on research for evidence, while research depends on evaluation to ensure its quality and value.
I’ve played with various metaphors for evaluative reasoning. For example, it’s a bridge that gets us from empirical evidence (independent observations of a program and its impacts) to a value judgement (a determination of its merit, worth or significance)…
Evaluative reasoning is also a hamburger, sandwiching evidence between two slices of evaluative bread - the top slice being the criteria and standards, the bottom slice representing the judgement-making process…
Evaluative reasoning is also a prism, representing a set of explicit values through which we synthesise multiple pieces of evidence, producing a lucid beam of evaluative clarity. I first used this metaphor in a presentation where I reversed a photo of a certain album cover from 1973. Eventually it became the VfI logo.
There are many different ways of implementing evaluative reasoning. Rubrics are one way. My first draft of the following picture used the tired old “layers of the onion” metaphor, but I was sitting in a cafe at the time, eating avocado on toast, and I thought: why not? So…
Evaluative reasoning is a part of a bigger and more intricate picture. The way I see it, evaluation is supported by a logic called evaluative reasoning which, in turn, is supported by a mindset and practice called evaluative thinking. This post was my effort to make sense of it.
Full credit to Chris Lysy for the next one. It’s my absolute favourite of his cartoons and I included it in my article about objectivity and subjectivity in evaluation. To me, it gets to the heart of evaluative thinking with admirable parsimony…
In my ongoing mission to disrupt VfM assessment, I’m keen for evaluators to use economic methods of evaluation (such as cost-benefit analysis) more often. However, I want us to apply evaluative thinking to CBA, understanding its strengths and limitations and mixing it with other methods. The American Chopper argument meme came in handy for this…
I sometimes describe CBA as a blender for values, because it whizzes together costs and benefits, along with a third ingredient called the discount rate, summarising the result in a single indicator like a benefit-cost ratio (BCR) or net present value (NPV). There’s more to CBA than just the number, of course. But when the results of a CBA start to travel (whether by newspaper, Cabinet paper, word of mouth, etc) it’s the number that earns the most air miles.
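For the curious, here’s a minimal sketch of what the blender does under the hood. All of the numbers are made up for illustration (the cost and benefit streams, the 5% discount rate, and the `present_value` helper are mine, not from any particular CBA): future costs and benefits are discounted back to present values, then combined into a single number.

```python
# A toy illustration of the CBA "blender": discount hypothetical cost and
# benefit streams to present values, then blend them into an NPV and a BCR.

def present_value(stream, rate):
    """Discount a stream of annual values (year 0 first) back to today."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

benefits = [0, 40, 60, 80]   # hypothetical benefits per year ($000s)
costs = [100, 10, 10, 10]    # hypothetical costs per year ($000s)
rate = 0.05                  # the third ingredient: a 5% discount rate

pv_benefits = present_value(benefits, rate)
pv_costs = present_value(costs, rate)

npv = pv_benefits - pv_costs   # net present value
bcr = pv_benefits / pv_costs   # benefit-cost ratio

print(f"NPV: {npv:.1f}, BCR: {bcr:.2f}")
```

Notice how much context gets whizzed away: by the time you’re holding the single number, the assumptions behind the streams and the discount rate are out of sight. That’s the part I want evaluative thinking wrapped around.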
Same idea, different picture. Which do you prefer?
We’re entering an era in which artificial intelligence (AI) takes over more and more of the tasks in an evaluation, which I hope will free us human evaluators to focus on the essentially human aspects of evaluation such as stakeholder engagement, evaluative reasoning, and evaluative thinking. For example, in this article I used a Large Language Model (LLM) called Perplexity to brainstorm a value proposition for a new government agency (e.g., to suggest to whom the agency’s work might be valuable and in what ways).
There are risks to using AI, one of which was presciently reflected in the following 1927 poem by A.A. Milne. If Christopher Robin wasn’t sure of the right answer to a question, he would ask Winnie-the-Pooh, with the logic that “if he’s right, I’m right, and if he’s wrong, it isn’t me”. This insulated Christopher from taking responsibility for the answer, but nothing good can come of that approach in an evaluation. Don’t be like Christopher!
If you’re still here, thanks for reading!
New memes coming soon (probably). Stay tuned.