Julian King & Associates turns 21 this week! This is the story of how I fell into evaluation and a personal thank you to the mentors, collaborators and clients who influenced my career path. I’ll name a few. There were many more.
Hi, my name’s Julian, and I am an evaluator. Has any kid ever said, ‘When I grow up, I want to be an evaluator’? I hope they’re out there. It’s the best job no-one has heard of. Most evaluators I know fell into it from other fields like public health, market research, statistics, law… even dentistry. I suspect this contributes something to its richness.
We’re all evaluators, from birth. We’re not taught how. We just do it instinctively. It’s part of being human and part of our success as a species. But we’re also susceptible to biases that have possibly contributed to humanity’s worst failings. When I started out, all I had to guide me were those instincts and biases. My education had not prepared me for this work. What series of fortunate events led me into program evaluation?
I discovered Policy by accident
As a new grad I took a short-term contract in Wellington, assisting a government agency with a project to implement a brand-new thing called the Community Services Card (it still exists, unlocking cheaper health care for people on low incomes). We shared a floor with a group of suits who were developing something high-powered and secretive to go to Cabinet.
Smoke-free workplace legislation had just been brought in. Smokers would stand outside for a break, ten minutes every hour. As a non-smoker, I felt that was inequitable, so I used to follow them downstairs and socialise. I enjoyed the company of the mysterious dark-suited ones. Their work felt exciting, dynamic and meaningful. I wanted some.
I signed up for the Master of Public Policy program at Victoria University of Wellington - a truly stimulating learning experience and excellent preparation for a career in policy. I learned the trade in policy teams in New Zealand and Canada.
Policy work is varied, but a core aspect is appraising options and providing advice to political decision-makers. Analysts define a need or problem, scope options for addressing it, develop decision criteria, examine the evidence along with arguments for and against each option, and provide recommendations.
Years later, I learned that I had been applying something called the General Logic of Evaluation, which is central to the work I do now.
My first evaluation training was economic
Thanks to a great boss who nurtured my interest, I attended a course in Methods for the Economic Evaluation of Health Care Programmes at McMaster University, taught by Prof Michael Drummond and colleagues. Their ‘little blue book’ of the same title is to this day regarded as a seminal text. It was the first evaluation text I ever read and my only evaluation text for several years.
I went consulting and fell into program evaluation
I like public policy consulting. The variety and the sense of urgency and purpose are invigorating. You never stop learning. I think we make a difference too, which is what it’s all about - though ironically, evaluators don’t always get to see evidence of this.
A lot of the work turned out to be evaluation. It didn’t seem like rocket science. We worked with clients to define questions and scope. Then we worked out a plan for collecting the facts, figures, and feedback we needed. Our project management, relationship management, and commitment to quality research and writing were on point. Generally, I think our reports were useful. However, they presented conclusions that probably fell short of being evaluative judgements, and certainly weren’t explicitly evaluative.
Then everything changed. Kataraina Pipi helped me develop a new business plan, part of which included forging stronger connections with the evaluation community. She encouraged me to join ANZEA, AES and AEA, and introduced me to super-smart people like Dr E. Jane Davidson. Jane’s rubric-based approach to evaluation was a game changer. More broadly, I came to realise that my academic training and my consulting instincts weren’t enough. There was a rich body of theory and practice underpinning evaluation as a discipline. 😮 I had some catching up to do!
Soon after, I joined the Kinnect Group, a phenomenal peer group who make me want to be a better evaluator, and my learning accelerated.
Grappling with value for money questions
There aren’t enough consultants who cross the disciplinary divide between economic evaluation and program evaluation. So perhaps it isn’t surprising that I would often be sought out to look at value for money. It seemed to me that both disciplines brought important puzzle pieces, but it wasn’t always easy to fit them together.
Sometimes we would put a lot of work into multiple methods, and then, when the report came out, important nuance would be swept aside in the excitement about a shiny number that said the program returned social value of $1.72 for every $1 invested.
Other times, programs were valuable in ways that didn’t show up in return on investment figures - for example, they addressed inequities, upheld human dignity, supported self-determination, or enhanced cultural or environmental value - things that didn’t easily or fully compute in a cost-benefit analysis.
A galvanising moment was when I heard a Professor of Medicine proclaim to a packed room that the “only way to really know” if something provides VfM is to conduct a randomised controlled trial (to attribute and quantify outcomes) followed by a cost-benefit analysis (to determine whether the outcomes, when valued monetarily, exceed the costs). From my own experience I knew this was too rigid a view, but I couldn’t clearly articulate why, nor suggest alternatives. This continued to bug me, so the late Professor was an important influence and I thank him for that.
A potential solution
ANZEA invited me to provide a workshop on VfM at their conference. I wanted to offer something different from another course on cost-benefit analysis so I started exploring an idea that we might be able to systematically combine evidence from economics and evaluation, using rubrics to bring findings together into a unified judgement. I ran it by some critical friends – Kataraina and Jane, together with Nan Wehipeihana, Kate McKegg, Judy Oakden, and Dr Fiona Cram. There’s no substitute for a brains trust like that for wrangling with the hard stuff!
We saw potential. My Kinnect Group colleagues brought me in on some amazing projects and, thanks to clients who were open to trying new ideas with us, we started developing rubrics aimed at combining economic and other evaluative insights. It was practical and useful, and it added rigour and transparency in drawing threads of a performance story together. There was no guidance on how to do it. We were learning by doing.
A chance conversation with Prof Patricia Rogers was a crucial tipping point. Though I’d never seriously considered doing a PhD, by the time we’d finished our coffees the penny had dropped that I really needed to pursue the topic as a piece of doctoral research.
All very well in practice, but how does it work in theory?
I signed up for the PhD program at the Centre for Program Evaluation (CPE), University of Melbourne. A seasoned consultant, quite used to juggling the demands of multiple projects, I was blissfully and naively undaunted by what lay ahead.
Much to my surprise (and nobody else’s), I discovered that a PhD isn’t just one more project. It involved learning how to think and write like a theorist - a new language to me. My supervisors, Prof Janet Clinton, Laureate Prof John Hattie, and Dr Ghislain Arbour, were instrumental in guiding me down this unfamiliar road. The wider CPE staff and fellow students were a source of inspiration and support. Across the ocean, Prof Brian Yates and Dr John Gargani were generous with encouragement and feedback. The whole experience was as rewarding as it was challenging. I can’t recommend it enough.
Praxis
At the same time as the theoretical side of my research was bubbling away at CPE, the practical side kept on developing through my consulting work. Sometimes, academic life and consulting life collided in the most fortuitous ways. One of these happened at an evaluation conference in Chicago where I presented a prototype model for interdisciplinary evaluation of VfM. In the audience was Luize Guimaraes, who at the time was setting up an innovative new female economic empowerment program in Mozambique.
Luize invited me to work with her team in the MUVA program. This in turn led to an ongoing series of collaborations with Oxford Policy Management (OPM), designing and implementing VfM assessments in extraordinary programs with prodigious teams. Theory fed practice, and practice fed theory. We published a Guide to VfM assessment. Two OPM projects became case studies in my dissertation. Our collaboration continues and I’m happy to be working with colleagues in Oxford this week.
Disrupting VfM
MUVA, now an established Mozambican NGO, also inspired me to think differently about my mission. MUVA is a social incubator, developing and testing novel approaches, building the evidence base, and influencing others to take successful approaches to scale.
Similarly, my focus now is incubating, learning and scaling. I’m on a mission to share the Value for Investment system and support others to apply it in their contexts. Together, we’ll continue to innovate and contribute new ways of understanding how well complex policies, programs and social investments create value. This is a collaborative, multidisciplinary space that thrives on tackling real-world evaluation challenges in diverse settings.
If you’re reading this, you’re already part of the Value for Investment community. You’ll find free resources on my website and you can hit me up for training or to collaborate with your team. Let’s disrupt value for money together!