Mixed Reasoning and Cubist Evaluation
Not as far-fetched as it may sound, and probably what many evaluators already do…
Reliable evidence is crucial in evaluation, but we need more than just evidence. We also need sound evaluative reasoning - the focus of this post.
When it comes to evidence, I think that viewing a subject from multiple angles and integrating different understandings can strengthen the validity of what (and how) we know. These arguments are well covered in the literature on mixed methods, bricolage, and eclectic methodological pluralism.[1]
I also think combining multiple viewing angles can strengthen evaluative reasoning - the theory and practice of making warranted judgements about the value of a policy or program. When it comes to evaluative reasoning, there’s more than one way to view it and do it. For example, Michael Scriven gave us the general logic of evaluation, arguing that evaluation involves judging merit and worth through the synthesis of criteria, standards, and evidence. Deborah Fournier unpacked different working logics connecting the general logic to a range of evaluation approaches. Jane Davidson introduced us to rubrics, a practical and intuitive way to implement the logic. Others, such as Thomas Schwandt and Robert Stake, argued that there’s more complexity and nuance to evaluative judgement-making than the general logic alone - such as critical thinking, intuition, responsiveness to context and stakeholders, and ethics. These perspectives are compatible, according to Gates and Schwandt, and I think so too.
Approaches to evaluative reasoning
Approaches to evaluative reasoning have been classified and catalogued. Here’s my synthesis of some of those taxonomies, in a diagram.
What this diagram illustrates is that:
Approaches to evaluative reasoning include tacit, all-things-considered, deliberative, and technocratic approaches. Tacit approaches are intuitive and holistic, and follow a narrative construction. All-things-considered approaches compare and weigh reasons for and against an argument. Deliberative approaches reach a judgement through collective public reasoning. Technocratic approaches systematically combine evidence and explicit values (Schwandt, 2015).
Technocratic approaches to evaluative reasoning can be broken down further and include if-then statements, cost-benefit analysis, numerical weight and sum (also known as multi-criteria decision analysis), qualitative weight and sum, and rubrics. Although labelled 'technocratic', these approaches can be co-created and used in participatory or democratic ways. (A small illustrative sketch of numerical weight and sum follows this list.)
Rubrics can take different forms, including generic, analytic, holistic, and hybrid.
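To make the 'numerical weight and sum' approach a little more concrete, here's a minimal sketch in Python. The criteria, weights, scores and rubric bands are hypothetical, invented purely for illustration; in practice they would be co-created with stakeholders, as in the scenario below.

```python
# Minimal, hypothetical sketch of 'numerical weight and sum' (multi-criteria
# decision analysis) feeding into a simple holistic rubric band.
# Criteria, weights, scores and band thresholds are invented for illustration.

criteria = {
    # criterion: (weight, score on a 1-5 scale from the evidence)
    "reach of the initiative":   (0.25, 4),
    "quality of implementation": (0.35, 3),
    "outcomes for participants": (0.40, 5),
}

# Weighted sum: overall = sum(weight_i * score_i), with weights summing to 1.
overall = sum(weight * score for weight, score in criteria.values())

def rubric_band(score: float) -> str:
    """Map the numeric result to a (hypothetical) rubric standard."""
    if score >= 4.5:
        return "excellent"
    if score >= 3.5:
        return "good"
    if score >= 2.5:
        return "adequate"
    return "poor"

print(f"Weighted score: {overall:.2f} -> {rubric_band(overall)}")
# Weighted score: 4.05 -> good
```

The arithmetic is trivial by design: the genuinely evaluative work lies in agreeing the criteria, weights and standards in the first place, which is where the deliberative and tacit approaches come in.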
We talk about mixed methods… why not mixed reasoning?
Consider this scenario.
We need to evaluate the quality and value of an initiative addressing a complex social problem. We interview stakeholders (from decision-makers to people whose lives are affected) to find out what matters to them about the situation. We develop a survey informed by the interviews to canvass more people. We develop a draft rubric informed by analysis of interview and survey data. We run a workshop with stakeholder representatives to refine the rubric. We send the revised draft rubric to a wider group of stakeholders and incorporate their feedback.
Voila! We’ve used mixed methods to discover, conceptualise, co-design and finalise a coherent set of values: a statement about what matters (criteria) and what good looks like (standards). Our research into values has turned those values into facts - and our use of mixed methods has broadened and deepened our understanding of those facts.
Next, we gather and analyse evidence of program activities and impacts to address the statements in the rubric, including quantitative and qualitative data from observations, documents, administrative databases, interviews, focus groups, surveys, and a cost-benefit analysis. We now have rich evidence on the who, the what, the why, the how, the how much and the how good of the initiative from multiple methods and perspectives.
Then we start to make sense of this evidence, asking what we think we know and how we think we know it, using the rubric to guide deliberations and making tentative judgements about quality and value. To navigate ambiguity between different pieces of evidence, we discuss what we're seeing overall, as well as the outliers, contradictions, surprises, and conundrums. This goes beyond multiple parallel methods - we're now mixing and triangulating evidence, deliberating on what it shows, what it means, how it should be weighed in the overall assessment, and what story it tells.
Initially, we do this as an evaluation team, to start getting our heads around it and to prepare a workshop with stakeholders. Then, in the stakeholder workshop, we present the evidence systematically, addressing each criterion in turn. Together we interrogate the evidence and discuss its credibility and validity, how performance should be judged against the rubric, what's been learnt, and what could be improved.
In doing so, we're combining a technocratic approach (analytic slicing and dicing of criteria, standards and evidence) with a deliberative one (collective reasoning with stakeholders) to consider, contextualise, debate and reach a set of all-things-considered judgements. Then we invite participants to stand back and check whether the judgements feel valid (and if not, why not) - tapping into their tacit evaluative capacities to identify any issues requiring further discussion or investigation.
Voila! We've just used reasoning strategies from all four domains of Schwandt's taxonomy (2015), implementing Scriven's general logic of evaluation through the use of Davidson's rubrics, while also employing broader strategies for judging value as argued by Stake and Schwandt, and perhaps even developing value as suggested by Schwandt and Gates. We've approached evidence (and ways of knowing) and values (and ways of valuing) through the multiple lifeworlds of different stakeholders and from multiple theoretical paradigms. All the while, we've kept enough structure and focus to navigate our way systematically and efficiently from hard questions to clear answers.
This is just one scenario, not a prescription. Specifics are of course contextual. But it does broadly represent the sort of process I promote and aim for. It's quite feasible, and many evaluators may already do it. Mixed reasoning isn't new, though perhaps we're not always explicit about it.
Mixed Reasoning and Cubist Evaluation
Mixed reasoning is part of my Cubist Evaluation proposal. Drawing inspiration from the early 20th-century Cubist art movement, Cubist Evaluation proposes that doing rigorous evaluation work in complexity involves:
Honouring multiple perspectives. Instead of depicting something from a single point of view, evaluation, like Cubist art, should seek diversity, depicting the subject from multiple perspectives to represent it in a greater context. Mixed reasoning can open up the space for a wider diversity of perspectives in evaluative sense-making.
Challenging dominant narratives. Cubists bravely challenged conventional and prevailing narratives and perspectives. So should evaluation, especially where dominant narratives marginalise people, ideas or inconvenient pieces of evidence. Deliberative and tacit approaches to evaluative reasoning provide avenues for balancing, ‘stress-testing’ and challenging conclusions reached through dominant technocratic approaches, exposing issues that may warrant deeper consideration.
Contributing new meaning through analysis and re-synthesis. Analogous to the way Cubist painters broke up and reassembled subjects and objects, providing not just another photorealistic picture but an abstraction challenging the beholder to see things in new ways, evaluation should contribute new meaning from the data as it moves beyond 'what's so' to 'so what'. A diversity of reasoning approaches can only enrich this potential.
Opening up possibilities. Cubism opened up almost infinite new possibilities for the treatment of visual reality in art. The Cubist movement had enduring impacts on art, architecture and film, and influenced later abstract styles such as Constructivism. Similarly, using and mixing diverse approaches to evaluative reasoning can give people a greater sense of what's possible, leaving them empowered and equipped to make meaning and communicate value.
Overall, Cubist Evaluation seeks to challenge and disrupt conceptions of rigorous evaluation, through:
Thinking beyond measurement to mixed methods, bricolage, and eclectic methodological pluralism
Thinking beyond methods to evaluative thinking, evaluative reasoning, and mixing different reasoning approaches
Thinking beyond objectivity to inter-objectivity, subjectivity, inter-subjectivity and collective sense-making - supporting pluralistic methodologies and reasoning through the combination of individual and collective, external and internal perspectives.
Mixed reasoning is a means to an end
Combining different reasoning strategies can strengthen the validity of evaluative conclusions, just as mixing methods can strengthen the validity of evidence.
Whatever mix of approaches is used to make evaluative judgements from evidence, people are responsible for making the judgements. The different reasoning strategies are just supports to help us reach well-thought-out judgements. Using more than one supporting strategy adds fresh viewing angles and opportunities for checks and balances.
Ultimately, as Michael Patton (p. 18) said, evaluation is “not first and foremost about methods, but is about making sense of evidence and creating a coherent, logical, and, ultimately, if successful, persuasive argument about what the evidence shows”.
[1] Mixed methods, bricolage and eclectic methodological pluralism are distinct but overlapping concepts at a philosophical level. From a practical perspective, just bring all your tools and tricks and combine them in contextually responsive ways. Stilton, Emmental and Pecorino are all cheese to me.