Unless you’re a first-time reader, you’ll know I’m a fan of rubrics. They’re the backbone of most evaluations I’ve tackled in the last 15(ish) years and it’s no exaggeration to say they’ve transformed the way I see and do evaluation.
Though it’s accurate to describe a rubric as a matrix of criteria and standards, and a way to make evaluative reasoning explicit, a deeper analysis would reveal that a rubric is also:
An expression of a shared set of values
A doorway into important conversations that might not otherwise happen
An inclusive, power-sharing, co-construction process to articulate what matters (criteria) and what good looks like (standards)
A vehicle for demystifying the process of making evaluative judgements, supporting understanding, ownership and use of the evaluation
A mechanism to help stakeholders recognise different viewpoints and ensure the evaluation doesn’t just reflect the values and assumptions of those who pay for it and those who do it for a living.
Something else about rubrics though - they ain’t easy
I had a few different jobs before I discovered program evaluation, and they all taught me lessons that I carry with me. Examples, in no particular order, include butcher’s assistant, 🐓💩 shoveler, call centre manager, policy analyst, fiscal forecaster and modeller, and flying instructor.
The flight training curriculum was designed to cumulatively develop students’ knowledge and skills so that, after about 50 hours of experience, they would be ready to sit the test for their private pilot’s licence. Each lesson consisted of a briefing at a whiteboard, followed by a session in the sky to learn practical skills and develop the muscle memory that makes controlling things like bikes, cars, and planes instinctive.
Something that’s stayed with me from that previous life is the realisation that some skills can only be learned, not taught. I can’t teach you how to land a plane but I can describe the process, sit next to you and keep you safe while you teach yourself.
Rubrics are like that. The process is simple enough to describe: Get the right people in the room. Facilitate a conversation about what matters and what good looks like. Organise the feedback into a matrix of criteria and standards. Refine it till everybody’s happy.
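Purely for illustration, here’s a sketch of what that matrix can end up looking like - the criteria and standards below are hypothetical, not from any real evaluation:

| Criterion (what matters) | Excellent | Adequate | Poor |
| --- | --- | --- | --- |
| Reach | The people the program intends to serve are participating, including those hardest to reach | Most intended groups are participating, with some gaps | Participation is low, or skewed away from intended groups |
| Relationships | Partners describe trust, candour and shared ownership | Partners cooperate, but mostly transactionally | Partners report friction or disengagement |

A real rubric would be negotiated in the stakeholders’ own words; the point here is simply how criteria (rows) and standards (columns) fit together on a page.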
Bringing a rubric in for a nice smooth landing, though? That’s a muscle memory thing. You have to practice and learn how to do it your way.
Rubric development is an art and a science, requiring the right mix of stakeholders, knowledge and perspectives, cultural competence, facilitation, conceptual thinking, wordsmithing, and graphic design skills. It’s a team sport.
And just like landing a plane, it can get a little bumpy sometimes. Here, in a similar spirit to Kylie Hutchinson’s book, Evaluation Failures, are three examples of experiential learning that have shaped how I approach working with rubrics.
1. Too much detail
The first time my team and I developed rubrics, we pushed the boat out. We planned meticulously, got around a big table with all the experts, dived into the detail… and emerged some time later with 8 rubrics running to a total of 25 pages! Though carefully crafted, they contained too much detail, making them hard to use.
How did that happen? In hindsight, it was as if, instead of preparing a guide for making evaluative judgements, we were writing an algorithm that would enable a computer to make the judgements for us. Or as if we were drafting a watertight legal document with enough ifs and buts to eliminate every loophole.
Rubrics don’t make judgements - people do. What matters is that the judgements are explicitly and logically linked to evidence and rationale. Rubrics help us get there. They guide the process. They don’t have to be watertight. They do have to be co-constructed, concise and meaningful to stakeholders in the context where they are used.
We humans are comfortable holding only a few things (say, five to seven?) in our heads at once. We absorb plain language better than legalese. When we express our values in our own words, we like to see those words reflected in the rubric, even if the language is a little informal. Authenticity is more important than literary perfection. Rubrics represent the shared values of a relevant group of people for a particular purpose, so they’re negotiated and refined collectively. What matters is that the expression of shared values resonates with the group.
Lesson #1: If the rubric doesn’t fit on a page or so, or if reading it feels like deciphering the small print of a rental car agreement, consider whether it may be too detailed.
2. Wrong process
Rubrics are more than a matrix of criteria and standards - they’re the product of an important conversation. How you facilitate that conversation depends on who it’s with. There are lots of ways to facilitate group processes, and we each have our own frameworks and preferences, and bring our own personalities to the work. Over time, we learn multiple approaches and get better at selecting a good one for the circumstances.
I had a favourite rubric development process. Perhaps I was a little too attached to it. It involved projecting a Word document up on a wall and facilitating a group conversation while paraphrasing key points and organising them into a draft rubric for participants to read, reflect on and refine. Admittedly a bit of a party trick but also, I found, a good way to keep myself and participants engaged and focused.
However, no facilitation trick suits every occasion. One day it simply didn’t work. At the start of this particular workshop, we lost 15 frustrating minutes trying to get the computer to talk to the projector, and then once the tech started working we just couldn’t seem to get any traction developing a rubric. Keeping us on task felt like a tug-of-war till I remembered Facilitation 101 - know when to throw your plan out the window and try something else.
By this time I was ready to heave the projector out the window too, but it looked expensive, so I gently switched it off, took a deep breath… and the session immediately relaxed into a self-managing focus group, where I had the privilege of listening to the most insightful, wide-ranging conversation among a group of natural systems thinkers. I took notes, from which I sketched a draft rubric back at my desk. We refined it together and moved ahead with the evaluation.
Building a rubric in real time with a laptop and projector is a great process if you’re with a group of stakeholders who are happy to think analytically, breaking concepts into pieces and organising them into lists and matrices. Not so much if the group thinks holistically, seeing the big picture and making connections between things. Since then I’ve come to prefer the richness of more complexity-oriented processes, though I haven’t completely abandoned the old projector trick.
There are countless strategies for facilitating rubric development. I shared some examples in a previous post. Often it’s good to start with a process that considers the context holistically before attempting to identify criteria.
Lesson #2: Do your homework on the group, tailor the approach to the context, bring your full bag of facilitation tricks and be ready to pivot.
3. Missing person
Rubrics articulate an agreed view of what good looks like. Who participates in developing the agreed view is critically important to the validity of the evaluation. That’s why we say “bring the right people together”.
Who those “right people” may be is of course a contextual thing, bearing in mind issues like validity, utility, credibility, voice and cost. Examples of stakeholders who often participate in rubric development include program architects, decision-makers, funders, delivery leaders and staff, people from the communities intended to benefit, subject matter experts, and end-users of the evaluation. The expert knowledge and multiple perspectives of different stakeholders strengthen the evaluation design, and their participation fosters understanding and ownership of the evaluation.
In one project, the senior person responsible for commissioning the evaluation was too busy to get involved, and delegated the whole thing to a member of her staff. So essentially she had no input into, and no knowledge of, the evaluation design. This activated an orange warning light on my mental project management dashboard, but 🤷‍♂️ what’s an evaluator to do, right? Well, after this project, I upgraded the dashboard with a red flashing light and a siren.
Much later, when we presented our draft report, this person was *Not Happy* with our findings. Naturally, she took aim at our methods. In this case, rubrics were our salvation: we were able to demonstrate that they had been co-constructed with stakeholders, that they represented an explicit and agreed basis for making judgements from the evidence, and that the findings were therefore not the subjective opinion of the evaluators but the logical conclusion of applying the rubrics to the evidence.
Lesson #3: The “right people” means all the right people. Those who are responsible for signing off on the final evaluation report should be present at the project inception meeting, should participate meaningfully in rubric development, and should give their explicit endorsement of the evaluation framework. I’ve learnt to be quite strident in my insistence on this. More on this in an earlier post.
Rubrics aren’t easy, but they’re worth it
Mastering the art of rubric development is a journey. It takes time and practice. Life is complex and so are programs, people, and rubrics. Expect the unexpected. Despite the learning experiences or “fails”, the value of rubrics is compelling.
Rubrics support clear reasoning at every step of an evaluation. They invite deep reflection on what quality, value and success mean in a program. They delineate the scope of the evaluation and help to clarify what evidence needs to be collected. They provide a framework for organising the evidence so it’s efficient to analyse. They provide a shared and agreed set of lenses for making sense of the evidence and for making evaluative judgements. They provide a structure for reporting findings on a no-surprises basis. Though rubrics are not the only way to do all this, I’ve yet to find a more practical, versatile alternative.
Really enjoyed this one Julian. “Rubrics don’t make judgements - people do” resonated strongly with me.
As for #3, I also think there’s an onus on leaders to know what delegation involves. We do need all the right people, but it’s also sometimes appropriate for senior management to take a back seat, provided they trust their employees to do so. My personal experience of rubric/Theory of Change development is that it gets unwieldy when you have more than six people in a room.