Thank you for the thoughtful reflections you shared in response to my post last week (Is evaluation violent?) - on Substack, on LinkedIn, and via direct messages and emails. Dr. Luke Roberts’ provocative keynote at the UK Evaluation Society Conference (The Violence of Evaluation: Reclaiming Evaluation for Complexity) clearly struck a chord. I’m returning to this topic now because your responses highlighted how nuanced and important it is.
When I shared last week’s post on LinkedIn (here), I included a poll. These polls don’t allow much room for subtlety. I posed a blunt question - and received a clear answer.

Epistemic violence
To sum up what I take from your comments collectively: there's strong agreement that extractive, reductionist or context-insensitive evaluation can cause real harm - for example, when it disregards participant rights, imposes external agendas, or marginalises certain forms of knowledge - especially when done to those with less power. This can happen whether intended or not, so we need to be sensitive to the unintended consequences of our work.
There's a body of literature - supporting the predominant poll response - that describes this harm as epistemic violence. This perspective, drawing on theorists like Pierre Bourdieu and Gayatri Chakravorty Spivak, maintains that “violence” powerfully describes the real, often invisible injuries inflicted by knowledge systems - injuries that have lasting effects on individuals and communities.
This writing spans multiple subfields, including colonial and postcolonial studies, feminist theory and gender studies, peace and conflict studies, indigenous and critical race studies, science and technology studies, psychology, education, and evaluation. For examples, see Linda Tuhiwai Smith’s Decolonizing Methodologies and Kristie Dotson’s article, Tracking epistemic violence, tracking practices of silencing.
Contrasting examples
Evaluation, its misuse, and its non-use can all have traumatic effects, including loss of agency, wellbeing, livelihoods, and lives. A particularly stark example is the impact of the large-scale cuts to USAID, which have threatened the continuity of life-saving health, food, and humanitarian programs in over 100 countries. These decisions, made in disregard of extensive evidence of need and effectiveness, are predicted to cause significant, preventable suffering at scale among some of the world’s most vulnerable populations. While the full consequences are still unfolding, these cuts highlight how ignoring robust evaluation evidence can have far-reaching and deeply harmful effects.
Some other cases are more subtle. For example, reductionism - the practice of breaking down complex phenomena into simpler components - has both benefits and risks. While reductionism can enable clear analysis and focused evaluation, it may also lead to harmful oversimplification by ignoring critical context, interactions, and emergent properties that only become apparent when we consider the whole system. To illustrate:
When urban planning is guided by a narrow focus on optimising car traffic, it often overlooks the needs of pedestrians, cyclists, and the broader community. This approach can compromise road safety, worsen pollution, and undermine public health, community liveability and social cohesion.
Medical treatment that focuses on individual symptoms or diseases, rather than the whole person and their context, can overlook how conditions and life circumstances interact, potentially leading to conflicting treatments, missed diagnoses, poorer health outcomes and diminished quality of life.
Education systems that prioritise standardised test scores as the main measure of student achievement, breaking down learning into discrete, testable units, can lead to narrowed curricula (‘teaching to the test’) to the detriment of critical thinking, creativity, and social skills. Students who don’t excel in standardised tests may be unfairly labelled as underperforming, which can harm their confidence and future opportunities.
There's debate in the literature - echoed in some of your comments - about whether "violence" is a bridge too far when describing situations like these. Does it enhance our moral and analytical clarity, or does overuse, as in the fable of the boy who cried wolf, desensitise us to the gravity traditionally associated with the word?
What counts as violence should be defined by the receiver
My immediate reaction to Roberts’ presentation was that labelling all of evaluation’s various harms as “violent” dilutes the term’s meaning by grouping acts like misrepresentation and marginalisation with physical assaults, shootings and bombings. Some comments on my post echoed this concern.
On the other hand, the consequences of epistemic harms can be significant, and vary by context and perspective. As one colleague commented, “…epistemological violence is very real for those on the receiving end of it… this is not physical violence, but it’s just egregious, and has real and traumatic effects like poverty, ill health, higher death rates, etc.” Those who experience the harm have the right to name it.
Positionality matters in discussions about harm, violence and evaluation practices
Another commenter noted: “If the evaluations (& related writings, including yours) do not communicate clearly their world view they are epistemically violent”. I agree that we have an ethical responsibility to be transparent about our own perspectives - so let me briefly address mine.
My understanding of violence and harm is shaped by both personal experience and professional practice. I have been on the receiving end of various forms of physical and psychological violence, as well as unethical and harmful evaluation, which have heightened my sensitivity to the language we use to describe them. I regularly assist in evaluations of policies and programs that affect the lives of vulnerable and marginalised people, and I’m acutely aware of the potential for harm. I advocate for evaluation practices that are not only methodologically sound but also ethically attuned to those affected. I strive to use language that is precise, honest, and sensitive to the lived realities of others. I write from a position of intersectional privilege.[1]
Not all harms are equivalent, and we have a rich vocabulary to describe them
Claudia Brunner’s work frames epistemic violence as a structural feature embedded in the production and dissemination of knowledge, with profound implications for power and marginalisation. Importantly, Brunner also emphasises the need for conceptual rigour - urging us to critically examine and understand how the concept of violence applies in different contexts, rather than using it as a catch-all label or conflating epistemic with other kinds of violence.
This resonates with me because I value the use of a nuanced and precise vocabulary to convey meaning.
It’s clear from last week’s discussion that some people experience evaluation practices as violent. In my view, this makes it essential to acknowledge these realities and ensure they are included in the evaluation discourse.
At the same time, I humbly submit that when we try on the word "violent" for fit, we should also carefully consider other adjectives that may capture the specific nature and impact of harmful evaluation in different contexts. To foster this precision, here’s a range of examples:
Alienating - e.g., the evaluation report used jargon that made participants feel excluded or misunderstood; “I didn’t recognise myself in that report”.
Betraying - e.g., the employer sent in a staff member under the guise of helping on a project, but with a hidden agenda to covertly assess the performance of a colleague.
Coercive - e.g., staff were pressured to sign consent forms under threat of losing access to benefits.
Controlling - e.g., the evaluator dictated every aspect of the project, leaving no room for community input.
Dehumanising - e.g., an evaluation coded a positive, life-changing, but unintended outcome, such as a typhoon survivor using a cash grant to escape domestic violence, as a “misuse of funds”.
Delegitimising - e.g., the evaluator dismissed local knowledge as unscientific and irrelevant to the findings.
Demeaning - e.g., Reviewer 2’s feedback was not only dismissive and sarcastic, but also failed to engage constructively or respectfully with the work, undermining scholarly dialogue.
Discrediting - e.g., the report implied that community leaders were unreliable without evidence to support the claim.
Discriminatory - e.g., the assessment tool penalised non-English speakers, resulting in lower scores for them.
Disempowering - e.g., community members were not allowed to review or comment on findings before publication.
Disenfranchising - e.g., only certain groups were invited to participate, excluding others from having a voice.
Dishonest - e.g., the evaluator promised anonymity but later shared identifying information with commissioners.
Dismissive - e.g., concerns raised by participants were brushed aside as irrelevant.
Disregarding - e.g., evaluators ignored feedback from people with lived experience, relying solely on their own interpretations of official data.
Domineering - e.g., the annual performance appraisal process was used as an opportunity for the boss to flaunt his dominance over employees, rather than to provide meaningful feedback or foster an open exchange of views.
Erasing - e.g., the final report omitted all references to indigenous practices discussed during interviews.
Exclusionary - e.g., the survey was distributed only to landowners, preventing renters from participating.
Exploitative - e.g., the evaluation team recruited participants from a low-income community, promising compensation, but paid them far less than the value their contributions brought to the work.
Expropriative - e.g., an evaluator collected stories and data from program participants about their experiences but reported only the challenges and shortcomings, without including participants’ voices or context; the report was used to justify cutting the program’s funding, despite the community’s opposition and without offering them a chance to respond or clarify.
Extractive - e.g., foreign evaluators conducted interviews in an indigenous community, harvesting stories and cultural knowledge, then published articles without sharing results, credit, or benefits with the community.
Instrumentalising - e.g., community members were only involved to fulfil a funder’s requirement, not as genuine partners.
Invalidating - e.g., when participants described their experiences, the evaluator insisted their perceptions were incorrect.
Manipulative - e.g., the evaluator selectively quoted participants to sway decision-makers.
Marginalising - e.g., the evaluation prioritised the perspectives of dominant groups, sidelining minority voices.
Neglectful - e.g., the evaluator failed to follow up on reports of harm shared during interviews.
Objectifying - e.g., participants felt treated as “cases” rather than as individuals with unique experiences.
Oppressive - e.g., the evaluation commissioner reinforced existing power imbalances by appointing a team of academic experts from the dominant culture to evaluate a program in an indigenous community.
Othering - e.g., the report described local customs as “exotic” and “primitive”, reinforcing stereotypes.
Paternalistic - e.g., the evaluator made decisions “for” the community without consulting them.
Pathologising - e.g., the evaluation framed cultural values as problems or dysfunctions that needed correction.
Patronising - e.g., feedback was delivered in a condescending tone, implying participants could not understand complex issues.
Reckless - e.g., despite robust evidence linking lower speed limits to reduced road deaths and injuries, authorities reversed speed limit reductions, prioritising convenience over public safety and putting lives at increased risk.
Reductionist - e.g., a cost-benefit analysis oversimplified a complex and contested issue, aggregating conflicting values, privileging monetisable costs and benefits, and sidelining important intangible values such as community wellbeing and cultural heritage.
Silencing - e.g., critical feedback was omitted from the final report to avoid controversy.
Stigmatising - e.g., the evaluation labelled certain groups as “hard to reach” without considering the government’s failure to offer relevant, accessible, and culturally responsive services.
Unaccountable - e.g., the evaluator refused to share results or answer questions about the methodology used.
Undermining - e.g., the report questioned the competence of local staff without justification, damaging their credibility.
Unjust - e.g., the evaluation’s recommendations disproportionately benefited already-privileged groups.
Unjustified - e.g., major changes were made to the program based on limited or flawed evidence.
Unresponsive - e.g., despite clear evaluation findings demonstrating the benefits of nutritious school meals, decision-makers opted for the lowest-cost provider, resulting in widespread food wastage, meals that were unappealing and nutritionally inadequate, and increased operational burdens on schools - undermining the program’s effectiveness for the students who needed it most.
Unwarranted - e.g., the evaluation judged program performance to be “poor” but offered no explicit criteria or standards to logically connect the evidence to the conclusion.
Violent - the best-fitting word may still be “violent” (or “violent and...”). That’s contextual, and not for me to decide. I include it here because for some, it might be the term that best conveys the depth or nature of harm they have experienced through evaluation. This list is intended to support precise, honest dialogue, not to rank or limit the language available to describe harm.
This list got longer than I expected, and it is still probably incomplete. Which terms best capture your own experiences of harmful evaluation practices?
As always, thank you for your engagement.
[1] I’m a 7th generation New Zealander of mainly Scottish and English ancestry, middle-aged, middle-class, university-educated, cisgender. As a consultant, I have significant work experience as an outsider collaborating with insider colleagues, including evaluation in indigenous communities and with marginalised groups such as those facing mental health and addiction challenges. Professionally, I advocate for evaluation practices that are inclusive (involving stakeholders in co-design, fact-finding, and sense-making), clearly reasoned (synthesising evidence through explicit criteria and standards), and pluralistic (embracing interdisciplinarity, mixed methods, and bricolage approaches).