VfI meets TM: Monitoring that actually matters
How to rescue monitoring from managerial boredom
This is the story of how a meeting at Borough Market between Daniel Ticehurst and Julian King in London sparked an idea: What would it look like if we integrated Julian’s Value for Investment (VfI) with Daniel’s Thoughtful Monitoring (TM)?
The purpose of monitoring is deceptively simple: to improve implementation performance and prospects for the lasting impact of social programs. Monitoring is a core management function. But if you’re a manager looking for guidance, good luck! The resources are thin. You’ll find more manuals on evaluating programs than running them well. In the aid world, monitoring is the awkward cousin to evaluation; compared to the prestige, resources, and intellectual gravitas afforded to evaluation, monitoring has been left to wither.1 You’d think figuring out how to make programs work through continuous monitoring and reflection would matter as much as judging their merit and worth. Implementation is an understated task.
The problem starts at the top. Too often, agencies and consulting firms equate “management” with internal controls - budgets, risk registers, procurement, and report-signing - while ignoring the management of relationships between programs and affected communities. Monitoring is too often mistaken for a purely administrative exercise that generates periodic reports - typically a bland catalogue of activities and movements in indicator values that describe performance without explaining it, leaving managers with little insight into what’s working or what to fix. Instead, that job is deferred to evaluators - often too late.2
So the real question is this: In a sector obsessed with proving its worth, why are we so reluctant to track it in real time?
This blog is a call to overhaul that thinking. Riding the wave of interest in “value for money” (VfM), we argue that monitoring and VfM assessment should be inseparable - two sides of the same coin rather than two separate ‘systems’. We’ll be better placed to claim value when we know, in real time, whether we’re delivering it.
What started in Borough Market as a casual chat swapping backgrounds soon shifted into something more catalytic. Julian spoke of Value for Investment (VfI), a disciplined way to make transparent, reasoned judgements about the worth of an intervention. Daniel described Thoughtful Monitoring (TM), a practice rooted in listening, relevance, and real-world use.
We quickly realised we were circling the same frustrations: rigid, dogmatic approaches to VfM and monitoring; efficiency worship; managerial and, especially, evaluation jargon; the hegemony of methodology as a form of panacea; and an almost mystical faith in indicators as “objective” truths.
We traded stories of watching complexity being flattened into simplistic scorecards or, conversely, made so abstract as to be incomprehensible, and of methods used more to please funders than to serve the people programs were meant to support. Yet, beneath our different styles and backgrounds, we heard the same through-line in each other’s work: a stubborn insistence on holding space for critical thought, for mixed-methods evidence, and for monitoring that reveals and engages with complexity rather than reducing it to abstractions.
At the end of the conversation, the outlines of this blog emerged. If Julian’s VfI could bring structured evaluative reasoning, and Daniel’s TM could ensure monitoring stayed relevant, human, and adaptive, then together they might just build an approach that was credible without being rigid, participatory without losing independence, and useful to everyone, not just funders.
Borough Market had served its purpose. We left not just as colleagues, but as co-conspirators.
What is Thoughtful Monitoring?
Thoughtful Monitoring (TM) is an approach to monitoring in development and humanitarian aid that prioritises learning, adaptation, and accountability to the people programs aim to support, rather than primarily to funders. It rejects the overemphasis on indicators (quantitative or qualitative), rigid results frameworks, and needy theories of change, instead encouraging managers to embrace uncertainty, interrogate assumptions, and integrate monitoring into everyday decision-making.
This approach recognises that in complex systems, the most important information is often unknown or immeasurable.3 It also stresses that the legitimacy of social programs lies in the perspectives and knowledge of those intended to benefit. Rather than being a parallel, technical function run by isolated “monitoring & evaluation” (M&E) teams, monitoring should be embedded with management, operations, and planning, ensuring rapid feedback loops and responsiveness to evolving needs.
At its core, TM is about creating spaces for open communication - between managers, frontline staff, and communities - so that experiences, challenges, and successes can shape implementation in real time. It values indigenous knowledge systems, acknowledges the diversity of contexts in which programs operate, and treats monitoring as a means to strengthen relationships and trust with those most affected by interventions.
TM is highly relational. By avoiding the pitfalls of siloed systems, M&E specialists, and overly technocratic/academic approaches, TM shifts the purpose of data collection from compliance to meaningful use, enabling organisations to adapt, improve performance, and remain accountable to the people they serve.
Features of TM include:
Focus on integration: Many problems programs face in monitoring are neither methodological nor technical, but organisational and managerial. They centre on what monitoring is for, where it sits, and who is responsible for it. Monitoring needs to be integrated into the structures, functions and processes of a program or organisation.
Use mental tools:4 Ask how monitoring can benefit those responsible for implementing programs, including identifying their information requirements and understanding the decision uncertainties they face.
Context-responsive, adaptive monitoring: Ensure that the nature, function, and processes of monitoring reflect the complex and uncertain operating environment, in particular the need for rapid feedback loops and the consequences of this for decision-making and operations.5
Design for complexity: Accept that the most important information in monitoring interventions in complex systems may be unknown. Therefore, afford attention and effort to adequately researching and periodically examining assumptions.
Learn from indigenous knowledge systems: Recognise how monitoring can support learning from, and bring material value to, indigenous knowledge systems, which - in contrast with western societies - have embraced complexity for centuries.
Responsiveness to local values: Balance the need to be accountable to program funders with the need to adapt monitoring based on conversations with those intended to benefit from programs about what matters to them.
Give voice to those delivering support, so their experiences, successes and challenges inform senior management decisions and program improvements.
Without TM, evaluation risks being disconnected, retrospective, and constrained - a technical report on a partial story. With TM, evaluation can become more dynamic, useful, and alive to the complexity of change.
What is Value for Investment?
Value for Investment (VfI) is an evaluation system designed to help people determine how well resources are used in public policies and programs, whether sufficient value is created, and how more value could be generated from the resources invested. VfI integrates evaluative and economic thinking to create comprehensive, practical, and context-responsive frameworks for assessing resource use and value creation.
The VfI approach treats policies and programs not simply as costs, but as investments in value propositions. By defining the value proposition, we can evaluate how well it’s being met. VfI is underpinned by evaluative reasoning - collaborating with stakeholders to articulate a framework of agreed criteria (aspects of value) and standards (levels of value), and using this framework as a scaffold to: select and justify an appropriate mix of methods; organise, analyse, and synthesise evidence; make and communicate transparent judgements.
The VfI approach is underpinned by four key principles:
Interdisciplinarity: Combining theory and practice from evaluation (determining value) and economics (studying resource use) to provide better answers to VfM questions.
Mixed methods: Quantitative and qualitative methods, chosen contextually and combined thoughtfully, help assess value from multiple perspectives and understand the story behind the numbers.
Explicit evaluative reasoning: Evaluative judgements are guided by contextually-determined criteria and standards and deliberation (often but not necessarily implemented with the help of rubrics; often but not necessarily informed by the 5Es as a starting point for scoping criteria).6
Participatory: Power sharing and collaboration with stakeholders inform evaluation design, fact-finding, and sense-making. This ensures monitoring and evaluation reflect the values of people with a right to a voice, and those involved in program design and delivery. Participatory evaluation also promotes evaluation use, with the intent that VfM assessment should be used for learning and adaptation, in addition to accountability.
More broadly, VfI evaluations should be guided by principles of evaluative thinking and program evaluation standards, and may aspire to the ideals of Cubist Evaluation - contributing new meaning by deconstructing and reconstructing ideas (analysis and synthesis), challenging dominant narratives, honouring multiple perspectives and opening up new possibilities.
Points of commonality
Both VfI and TM are rooted in a shared understanding that successful programs are: deeply attuned to their contexts; designed, implemented, monitored and evaluated with or by stakeholders (not done for, or worse, to them);7 and relentlessly focused on real-world use, value, and improvement.
Contextual insight means recognising that no two environments, communities, or delivery settings are the same. Both approaches insist that evidence and judgements of value must be grounded in a local understanding of what matters, what’s possible, and how change occurs. This pushes us to go beyond one-size-fits-all frameworks and to co-design solutions and frameworks suited to real people in real places.
Stakeholder engagement is a foundation for both approaches. TM and VfI both value power sharing and participation, involving those delivering support and those intended to benefit. Meaningful engagement creates buy-in, surfaces blind-spots, and ensures that both monitoring (TM) and evaluation (VfI) are inclusive and collaborative. Shifting the balance of power ensures programs reflect the priorities and lived realities of those impacted.
Use-oriented monitoring and evaluation lies at the heart of both approaches, with a view to generating insights that decision-makers can act on, not just reports for compliance's sake. The aim is to ensure evidence is timely, relevant, and translated into changes in practice or policy where they count most.
Both approaches champion learning and adaptation as ongoing, active processes. Rather than waiting for a final report years after implementation, they build in feedback systems, encourage ongoing reflection, and create space for course correction as new information emerges or circumstances change. This keeps programs adaptive, resilient, and better able to deliver on their value propositions.
Both approaches put value before compliance, refusing to settle for “good enough” or just ticking funder boxes. VfI and TM recognise that the real test of any program is whether it delivers meaningful value for those who matter most, and leaves people - and the systems they live in and depend on - stronger in the long run.
The case for combining TM and VfI
We see four key arguments for combining these two approaches:
From measurement to meaning: VfI makes evaluative reasoning explicit to assess whether an intervention is delivering on its value proposition. TM asks a related and vital question: is this meaningful to those involved? By combining them, M&E could shift from ticking indicator boxes to forming shared, context-rich judgements about what’s worthwhile, uniting formal credibility with grounded utility.
Adaptation in complexity: Complex, uncertain environments demand both discipline and flexibility. VfI thrives where trade-offs and contested values must be navigated. TM keeps systems light, responsive, and people-centred, ensuring data remains relevant as conditions shift. Together, they balance analytical rigour with nimbleness, building monitoring systems that themselves adapt without losing accountability.
Mutual accountability: While VfI strengthens transparency through explicit evaluative reasoning, mixed methods and stakeholder participation, TM challenges top-down accountability by fostering shared responsibility among communities, practitioners, and funders. Merging these perspectives creates a model where accountability is both multidimensional and mutual, clear enough for funders, and responsive to frontline realities and shifting priorities.
Better questions, richer evidence: VfI starts with, What constitutes good value for the resources invested?, while TM begins with, Who is this for, and why do they care? The combined approach ensures not only better answers but also better questions, rooted in shared values, local relevance, and multiple ways of knowing. It values quantitative and qualitative insights equally, blending stories, lived experiences, and relational evidence with structured analysis. The aim is an integrated M&E system that is credible without being rigid, flexible without becoming vague, and genuinely useful for everyone it aims to serve.
What could this actually look like?
How to combine them? We can think of three possibilities, each with merit in specific circumstances:
TM with a light side-serving of VfI principles: serving the purposes of TM in contexts where aspects of VfM like economy and efficiency are being monitored. Here, for example, VfI could add to TM by contributing robust conceptual ways of dealing with VfM in a monitoring context.
VfI with a light side-serving of TM principles: serving the purposes of a VfI evaluation that includes regular (say, quarterly) monitoring of indicators (including VfM indicators) in between evaluations. TM could add to VfI by contributing practices and orientations to monitoring that ensure each cycle of monitoring and reporting is genuinely useful and gets used.
True integration of TM and VfI: serving the shared purposes of a fully integrated thoughtful monitoring, evaluation and learning system.
Here, we will focus on the third and most comprehensive of these options.
In practical terms, we think this would involve adapting VfI’s 8 steps to design a holistic TM-VfI system, with a shared theory of change, value proposition, and common language across the whole endeavour, and routine data collection that can be monitored evaluatively and used as a contributory source of periodic data for evaluation teams.
This stepwise framework was originally designed to guide the design and implementation of any evaluation that addresses VfM questions. However, it can be adapted to other contexts. It’s a road map, not a straitjacket, allowing space for reflective, reflexive practice. It distinguishes methods (how we gather evidence - steps 4, 5, and 6) from reasoning (how we make evaluative judgements - steps 2, 3, and 7). It starts by investing time in understanding the program and its context (step 1) and ends with guidance on communicating findings (step 8). It’s designed to be inclusive, involving stakeholders at each step.
In the text below, we unpack what this process would look like in a combined TM-VfI system.
Step 1: Understand the program
Shared orientation, collaborative judgement, context-aware comparisons
Begin with a shared orientation involving program teams, stakeholders, and intended beneficiaries to clarify TM-VfI objectives, key questions, and boundaries between activities. Lay strong foundations by developing a coordinated approach - e.g., clarifying the Theory of Change, value proposition, and setting priority monitoring and evaluation questions. Go beyond these elements, and any other tools like logframes, to ensure a deep understanding of the program’s purpose, context, and especially the realities of affected communities and frontline staff. Make assumptions explicit, explore how and why people will respond and benefit, and consider how these affect the program’s performance and the value it generates.
Beyond the considerations listed in the diagram, TM also focuses during step 1 on foundational system, management and staffing/responsibility issues, such as:
Helping ensure financial and non-financial monitoring are integrated, not developed as separate systems (for example, by applying activity-based costing to unify data sources and analysis)
Embedding the monitoring system within the organisation’s delivery and decision-making processes, so it functions as a practical part of operations rather than a standalone, artificial mechanism
Recognising that the primary responsibility for learning and adapting - especially in complex, uncertain environments - rests with management. TM emphasises this as a core leadership function, rather than delegating it to MEL or external learning partners.
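To make the first of these bullets concrete, here is a minimal sketch of how activity-based costing can join financial and non-financial monitoring data on a shared activity key, so cost and output analysis draw on one unified dataset rather than two parallel systems. All activity names and figures below are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Financial monitoring: expenditure lines tagged to activities
# (illustrative placeholders, not real program data)
expenditures = [
    {"activity": "farmer_training", "amount": 12000},
    {"activity": "farmer_training", "amount": 3000},
    {"activity": "seed_distribution", "amount": 8000},
]

# Non-financial monitoring: outputs recorded against the same activities
outputs = {
    "farmer_training": {"unit": "farmers trained", "count": 500},
    "seed_distribution": {"unit": "households reached", "count": 400},
}

def unit_costs(expenditures, outputs):
    """Join cost and output data on the activity key to give cost per unit."""
    totals = defaultdict(float)
    for line in expenditures:
        totals[line["activity"]] += line["amount"]
    return {
        activity: {
            "total_cost": totals[activity],
            "unit": out["unit"],
            "cost_per_unit": totals[activity] / out["count"],
        }
        for activity, out in outputs.items()
    }

result = unit_costs(expenditures, outputs)
# e.g. farmer_training: 15000 total, 30.0 per farmer trained
```

The point of the sketch is the shared activity key: because finance and operations tag their records the same way, economy- and efficiency-type questions can be monitored routinely instead of reconstructed at evaluation time.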
Steps 2-3: Criteria and standards
Collaboration with stakeholders to surface and articulate what matters
Working collaboratively with the monitoring and evaluation teams and stakeholders (including those affected by, and those who affect the program), define clear criteria for good resource use, delivery quality, and value creation, ensuring criteria reflect the priorities of those who will use the information and those impacted. Map out criteria together, distinguishing criteria to be addressed through regular monitoring and at specific points in the program cycle. Avoid externally-imposed benchmarks that lack contextual relevance.
Translate each criterion into clear, meaningful performance levels - such as excellent, good, adequate and poor, or an alternative nomenclature - that make sense to both program staff and stakeholders. Standards should be realistic yet aspirational, encouraging improvement while recognising contextual constraints. They create a basis for evaluative monitoring, enabling the program to move beyond dashboards to reasoned judgements.
Revisit criteria and standards periodically in reflection sessions with stakeholders, to ensure they remain relevant and credible.
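As a sketch of what criteria and standards might look like once captured in a form both monitoring and evaluation teams can share, consider the structure below. The criteria, levels, and descriptors are invented placeholders, not a recommended rubric - in practice they would be co-developed with stakeholders as described above:

```python
# Ordered performance standards, lowest to highest (illustrative naming)
LEVELS = ["poor", "adequate", "good", "excellent"]

# Hypothetical rubric: agreed criteria mapped to level descriptors
rubric = {
    "quality_of_delivery": {
        "excellent": "Communities and frontline staff consistently report timely, respectful support.",
        "good": "Support is generally timely and well received, with minor gaps.",
        "adequate": "Support reaches most people but with notable delays or quality issues.",
        "poor": "Support is frequently late, inappropriate, or missing key groups.",
    },
    "resource_use": {
        "excellent": "Resources consistently flow to where they add most value.",
        "good": "Resources are mostly well targeted, with some avoidable waste.",
        "adequate": "Resource use is defensible but not clearly optimised.",
        "poor": "Significant resources are tied up in low-value activity.",
    },
}

def describe(criterion, level):
    """Return the agreed descriptor for a criterion at a given level."""
    if level not in LEVELS:
        raise ValueError(f"Unknown level: {level}")
    return rubric[criterion][level]
```

Keeping the rubric in one shared artefact means each monitoring cycle and each evaluation rate against the same words, which is what makes judgements comparable over time.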
Step 4: Evidence needed
Decision-useful evidence; learning loops
Identify evidence that is necessary, credible, and acceptable to stakeholders and users for assessing performance and value against the agreed criteria and standards. This includes quantitative data and qualitative insights, as well as local and indigenous knowledge. Determine approaches to causality or contribution for outcome evaluation. Plan what mix of methods will be used, who will collect the evidence, and when. Design the process to serve all shared and specific TM-VfI purposes, while keeping it decision-focused and respecting the principle of exchanging, rather than extracting, information from affected people.
Steps 5-6: Gather and analyse evidence
Learning loops; context-aware comparisons; interpret insights collaboratively
Collect evidence using methods that promote dialogue, co-creation, and mutual learning, treating communities and stakeholders as active participants and subjects in conversations on issues that matter to them, not as objects on issues that matter to the interviewer.
Evidence may be gathered at different intervals for different purposes - for example, quarterly monitoring for operational insight and annual reviews for cumulative value assessment. Ensure methods are appropriate for context, complexity, and the need for rapid feedback loops. High-quality monitoring data from TM could provide evaluators with up-to-date, granular information that can help to address evaluation criteria, supporting robust, consistent judgements of progress. For example, regular monitoring can highlight strengths and areas for improvement in real time, supporting well-informed evaluative judgements and actionable recommendations.
Work with affected people and communities, frontline staff, and managers to explore and interpret findings collaboratively, rather than analysing data in isolation. This joint sense-making allows the team to surface patterns, explain variations, test assumptions, and uncover unexpected results. It ensures conclusions reflect lived realities and strengthens the credibility and usefulness of findings for both operational and strategic decisions.
Step 7: Synthesis and evaluative judgements
Learning loops; adaptive resource use; context-aware and collaborative reasoning
Bring together multiple evidence streams to form evaluative judgements, guided by the agreed criteria and standards. Apply different criteria at different intervals, balancing operational monitoring and longer-term evaluation objectives. Treat synthesis as “a collaborative, social practice” in which criteria and standards are democratic tools as much as technical ones. The real value of this step lies in inclusive conversations that make meaning from the evidence and inform next steps. Remain open to revisiting conclusions as new insights emerge.
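Where per-criterion ratings are recorded against shared standards, a small amount of tooling can organise them ahead of the deliberative conversation - organising, not replacing, the collaborative judgement described above. There is deliberately no averaging or weighting here; the sketch only groups ratings and flags where deliberation should focus. Criteria and ratings are hypothetical:

```python
# Ordered standards, lowest to highest (illustrative naming)
LEVELS = ["poor", "adequate", "good", "excellent"]

# Hypothetical per-criterion ratings from a monitoring cycle
ratings = {
    "resource_use": "good",
    "quality_of_delivery": "excellent",
    "community_responsiveness": "adequate",
    "equity_of_reach": "poor",
}

def synthesis_summary(ratings, minimum="adequate"):
    """Group criteria by rating and flag any below the agreed minimum,
    so the deliberation can focus where it matters most."""
    floor = LEVELS.index(minimum)
    by_level = {level: [] for level in LEVELS}
    for criterion, level in ratings.items():
        by_level[level].append(criterion)
    flagged = [c for c, lvl in ratings.items() if LEVELS.index(lvl) < floor]
    return {"by_level": by_level, "needs_attention": flagged}

summary = synthesis_summary(ratings)
# summary["needs_attention"] -> ["equity_of_reach"]
```

The design choice matters: collapsing the ratings to a single score would hide exactly the trade-offs the synthesis conversation exists to surface.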
Step 8: Reporting
Decision-useful insights; clear answers to important questions; learning loops
Share findings in formats and forums tailored to the primary users or decision-makers, program teams, and affected people and communities. Ensure reports answer the key questions directly and clearly. Present a coordinated, coherent set of reports with transparency about methods, evidence, and limitations. Make reporting a catalyst for action and learning, not a satisficing paperwork exercise. Report transparently and in accessible formats, facilitating ongoing dialogue and adaptive response across all audiences.
What are the potential benefits of TM-VfI?
Combining the two approaches is an opportunity to help program teams and organisations to:
Avoid developing parallel systems and unnecessary duplication of effort that impede rapid feedback loops and delay decision-making
Ask better questions, informed by what matters to stakeholders and rights-holders
Include diverse perspectives, by actively engaging different stakeholders and valuing their lived experiences
Reflect on what matters, keeping programs responsive to evolving needs and priorities
Clarify the differences between what monitoring and evaluation require - and meet those requirements in a coherent way
Adapt in real time, using TM insights to improve implementation without waiting for “final” decisions
Break down how programs and systems evolve in practice, reflecting on the context and operating environment
Make better decisions, based on shared judgements about what is valuable and why.
Combining VfI with TM creates a credible, adaptive, and context-aware approach. It balances rigorous evaluative judgement with participatory, learning-centred monitoring, helping programs respond to complexity, stakeholder priorities, and local realities.
Bottom line
Value for Investment tells us how to judge resource use and value; Thoughtful Monitoring ensures the evidence for those judgements is continually generated, used, and embedded into practice. Combining the two can create a system where evaluation isn’t an occasional event but an ongoing way of working, and where monitoring isn’t a compliance activity but makes a genuine contribution to the intel that underpins sound judgements. Ultimately, this is about culture change - not following a recipe.
Thanks for reading!
This blog represents the authors’ opinions and doesn’t represent other people and organisations we work with. It explores a hybrid model that we think has merit in theory but (as far as we know) hasn’t explicitly been tested in practice - though perhaps, for some (Developmental Evaluation practitioners, say), it is in some ways reminiscent of a normal day’s work. It’s intended as a conversation-starter - so please converse!
About the authors
Daniel Ticehurst supports people to design and use thoughtful monitoring systems that centre local perspectives, promote learning, and strengthen real-world impact. Check out Daniel’s Substack here.
Julian King helps people use evidence and explicit values to make good decisions, through VfI training, coaching and advice. Check out Julian’s Substack here.
Ticehurst, D. (2013). The awkward cousin: Why monitoring remains less valued than evaluation. Blog Post, Better Evaluation. [No longer available online]
Ticehurst, D. (2013). Who is listening to who, how well and with what effect? Paper presented at the International Development Evaluation Association Conference, Barbados.
W. Edwards Deming. (1982). Out of the Crisis. Massachusetts Institute of Technology Press.
Garbutt, A. (2013). Monitoring and Evaluation: A Guide for Small NGOs (INTRAC Toolkits, No. 2). Oxford: INTRAC.
Snowden, D. (2020). Building Scalable Organizations that can Deal with Uncertainty - with Dave Snowden. Boundaryless Conversations Podcast, Season 2, Episode 5.
The 5Es are economy, efficiency, effectiveness, cost-effectiveness, and equity. The 5Es are good conversation starters but they're not the full list of potential VfM criteria. If you use the 5Es, they should be defined in context-specific terms. This guide outlines how.
Wehipeihana, N. (2019). Increasing cultural competence in support of indigenous-led evaluation: a necessary step toward indigenous-led evaluation. The Canadian Journal of Program Evaluation. Vol. 34 No. 2.