Evaluative reasoning in complexity
Some thoughts of mine on the Stacey and Cynefin matrices, from 13 years ago: https://mandenews.blogspot.com/2010/08/test3.html
Another simple framework for distinguishing different levels of complexity is the Emergently Uncertainty Spiral, which uses the more intuitive terms:
Best Practice (no uncertainty - it has been done before)
Disciplined Practice (some uncertainty that can, with patience, be understood)
Shared Practice (high uncertainty, but there is some general agreement)
Next Practice (total uncertainty, apart from knowing what doesn't work)
The level of uncertainty determines the type of evaluation that is appropriate. With Next Practice (chaos), mistakes are inevitable, but what is the program team's strategy for avoiding repeated mistakes and remembering what worked?
I think your framing of the Stacey Matrix is helpful, particularly its application to informing rubric development. It reminds me that in reality we often find evaluands that are only fit-for-simple, despite the complex or chaotic contexts in which they operate, and (at times) evaluators are expected to evaluate against fixed performance measures that leave little flexibility. It is wise counsel to recommend rubrics that are nuanced and sensitive to the reality in which the evaluand is operating.
This is very useful. It explains why it was so difficult for us to develop a rubric for a complex and chaotic program involving many stakeholders with different views and interests. Yet it was so easy for our program team to agree on a more detailed rubric, because we all knew the context and had an aligned mission.