Navigating donor wants and needs in evaluation - Part 1
The case of the 1000 Days Fund
By Julian King and Zack Petersen
When philanthropic donor demands distort the mission: the real cost of impact theatre
The international development sector has a dirty secret that few want to discuss: philanthropic and private donors’ demand for measurable impact can systematically warp programs away from their purpose, creating elaborate performance theatre that prioritises fundraising over change. The 1000 Days Fund's work in Indonesia offers a powerful counter-narrative to this issue, but even the most effective organisations face pressure to conform to grant-maker expectations that fundamentally misunderstand how sustainable change happens.1
When indicators miss the point: methods, misuse, and meaning
Let's start with the absurdity that sparked this reflection: a California-based philanthropic organisation was recruiting a program associate. The application asked candidates to estimate the Social Return on Investment (SROI) of a hypothetical grant opportunity, in just 25 minutes.2 The exercise demonstrated everything wrong with common funder evaluation practices: it treated SROI calculations - comparing estimated benefits with estimated costs - as if they could be meaningful without a theory of change, stakeholder mapping or inclusion, causal inference, a time horizon, or a discount rate, and with no transparency about how monetary values are assigned to outcomes and costs, and no treatment of uncertainty.
The formula for calculating the benefit-to-cost ratio wasn’t even correct. This isn't rigour; it's cargo cult evaluation masquerading as quantification.
SROI and social cost-benefit analysis (CBA) are powerful methods when implemented well, offering informative insights into program value. However, poor application too often results in static numbers that provide little practical guidance - or worse, misleading figures that can skew decision-making.
This is not a shortcoming of the methods themselves but of how they are sometimes executed: different analysts choose different outcomes or values, and a lack of transparency about these analytic choices makes results highly variable. This increases the risk of "impact washing" - performative analyses that look rigorous and professional but are designed to impress funders rather than provide valid information to support improving vulnerable people's lives.
While implementation failures are a core problem behind subpar, opaque SROIs and CBAs, the grant-making environment also plays a role: when funders demand simplified metrics without sufficient attention to methodological quality, they incentivise shallow analyses instead of supporting the thoughtful use of these valuable tools to make a real difference. In other words, what passes as accountability can sometimes erode learning. Real accountability should strengthen the capacity of organisations to understand and improve their impact, not pressure them into producing numbers that look good but tell us little.
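To make concrete what "transparency about analytic choices" means in practice, here is a minimal sketch of the benefit-cost ratio at the heart of an SROI calculation. All figures are illustrative assumptions, not data from any real program or from the 1000 Days Fund; the point is that the time horizon, discount rate, and monetised values must all be stated explicitly, and the result tested for sensitivity - exactly what the 25-minute exercise omitted.

```python
# Illustrative sketch of a discounted benefit-cost ratio.
# All values are hypothetical assumptions for demonstration only.

def discounted(amount: float, year: int, rate: float) -> float:
    """Present value of an amount received `year` years from now."""
    return amount / (1 + rate) ** year

def benefit_cost_ratio(benefits, costs, rate):
    """Benefits and costs are lists of (year, amount) pairs.
    Returns present value of benefits per dollar of costs."""
    pv_benefits = sum(discounted(a, y, rate) for y, a in benefits)
    pv_costs = sum(discounted(a, y, rate) for y, a in costs)
    return pv_benefits / pv_costs

# Hypothetical inputs: a 3-year horizon, monetised annual outcomes,
# and an upfront cost plus one year of follow-on costs.
benefits = [(1, 60_000), (2, 60_000), (3, 60_000)]
costs = [(0, 100_000), (1, 20_000)]

# Sensitivity analysis: report the ratio under more than one
# discount rate, since the choice of rate is a value judgement.
for rate in (0.03, 0.07):
    print(f"rate={rate:.0%}: BCR = {benefit_cost_ratio(benefits, costs, rate):.2f}")
```

Even this toy version makes the analytic choices visible: change the discount rate and the headline ratio changes, which is precisely why a single unexplained number tells a funder very little.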
Founders don’t start programs to produce performative impact indicators - they’re out to make a difference. Philanthropic donors want to make a difference too. Founders and donors alike want to use resources well and demonstrate to their stakeholders that their actions are having an impact. Yet despite these shared intentions, the relationship between donors and programs is often strained by evaluation demands. For example, requirements for quantifiable, easily communicated impact can misalign with the realities of complex social change, generating distortions in organisational behaviour, significant administrative burdens, and a tendency to favour what is measurable over what matters.
The 1000 Days Fund: a different model
The 1000 Days Fund exemplifies a program dedicated to making a tangible difference. Founded to address stunting - a condition in which children experience impaired growth and development, primarily because of long-term poor nutrition, repeated infections, and lack of adequate care - the organisation focuses on one of Indonesia’s most pressing health challenges. Medically, a child is considered stunted if their height-for-age is more than two standard deviations below the World Health Organization’s (WHO) Child Growth Standards median. Stunting affects 1 in 3 Indonesian children during their first 1,000 days of life - and it cannot be reversed, making early intervention crucial to prevention.
Since its founding, the 1000 Days Fund has trained over 61,000 community health workers across more than 100 sub-districts of Indonesia. Its approach centres on building sustainable village-level health systems - an effort not easily reduced to simple metrics.
The 1000 Days Fund stands out for its focus on equity of outcomes rather than aggregate numbers. It works to reach the hardest-to-reach populations, including women on remote islands like Rinca and Komodo. Its low-cost Smart Charts, produced at less than a dollar per unit, are an example of human-centred design: a tool that helps community health workers explain stunting to caregivers while enabling families to monitor child development.
The 1000 Days Fund was born of a World Bank pilot on a handful of islands inside Komodo National Park, testing the hypothesis that trained and confident community health workers held the key to eliminating maternal malnutrition and stunting. Over the last 6 years, the initiative has expanded its reach dramatically, supporting nearly half a million in-home malnutrition screenings.
Following a series of meetings where the 1000 Days Fund shared its evidence from work across 5 districts in East Nusa Tenggara (NTT), the Minister of Health issued a personal directive requesting the 1000 Days Fund to scale operations across NTT - a region recognised for having some of the world's highest stunting rates. In response, the 1000 Days Fund is now exclusively focused on NTT and is mobilising resources to expand into all 22 districts in the province.
The scale of the challenge is considerable. NTT’s demographic profile - 1.4 million women of reproductive age, 700,000 children under five, and 54,000 community health workers - makes it comparable in size to countries such as Liberia. Meeting this challenge reflects the scale, ambition, and partnerships now required to address stunting at both a systemic and sustainable level.
The 1000 Days Fund collaborates with local health workers to create six-month plans for enhancing maternal, infant, and child nutrition and hygiene. This approach acknowledges that sustainable change requires building local capacity rather than imposing external measurement frameworks that may not reflect community realities.
The distorting effects of donor demands
The evaluation requirements of some philanthropic and private donors are known to distort organisational behaviour. NGOs are perversely incentivised to design their monitoring and evaluation (M&E) systems primarily to meet donor needs rather than community needs. Organisations may, for example, focus only on easily measured activities, set easily reached targets, or prioritise short-term outputs over longer-term sustainable change.
The competitiveness of grant-seeking creates added pressure on NGOs. To secure funding, leaders sometimes feel compelled to propose unrealistic objectives. This dynamic fosters defensiveness and, at times, dishonesty, with NGOs overstating their results or failing to recognise the contributions of others to shared outcomes.
Multiple donor requirements compound these distortions. Many NGOs have to juggle overlapping indicator sets, reporting formats, and compliance standards - sometimes for a single project. Even when funding comes from just one organisation, internal accountability processes and coordination frameworks can multiply reporting obligations. The result is an endless cycle of data entry, audits, and revisions that consumes capacity better spent on changemaking. Worse still, donors frequently request ad hoc information mid-implementation - details not specified in the initial agreement - forcing organisations to divert scarce time and resources to satisfy new reporting demands. The paperwork spiral is relentless, turning what should be real learning and adaptation into exercises in bureaucratic survival.
Why indicators aren’t enough
One of the fundamental issues is that philanthropic donors often want simple answers to complex questions. They desire evidence-based headlines - something approaching a bumper sticker saying "it works" - while viewing monitoring and evaluation as an opportunity cost rather than a strategic investment in better impact. This creates pressure for organisations to reduce their rich, multifaceted work to neat numerical summaries.
But sustainable development doesn't operate according to these simplified logics. How do you quantify the empowerment of a health worker who becomes a trusted knowledge source in her village? What's the benefit-cost ratio of a community's strengthened capacity to care for its children? These outcomes matter because they endure, ripple outward, and reshape systems - but they resist easy measurement. Indicators can tell part of the story, but they don’t define value.
A narrow fixation on quantification misses deeper forms of value creation. Organisations tackling complex social issues create impact across many dimensions including health, education, empowerment, equity, and sustainability. When evaluation frameworks focus only on what can be easily counted or costed, they risk missing the systemic changes that matter most and last longest. This isn’t an argument against indicators - it’s an argument against allowing them to define the boundaries of evaluation.
Donor psychology and power dynamics
Understanding why this distortion occurs requires examining philanthropic donor motivations and constraints. Donors genuinely want to make a difference and use their resources well. They also want to demonstrate to their stakeholders that they're creating impact, which sometimes creates tension between evidence and "bragging rights". When evidence doesn't live up to expectations, there may be organisational resistance to sharing or even believing unfavourable findings.
Donors also possess significant power in these relationships. Their funding comes with expectations that can pressure organisations to compromise or fundamentally change their approaches as the price of financial survival. This dynamic is particularly problematic when donor evaluation preferences reflect common but questionable assumptions - such as the belief that numbers are inherently more objective than stories, or that cost-benefit analysis (or similar) represents the "gold standard" for determining value.
These perceptions risk creating a dangerous narrowing of vision. The pressure to show quantifiable impact has led to wasted resources, weakened monitoring systems in favour of questionable “vanity metrics”, and a rise in poor or even misleading approaches to demonstrating results. As a consequence, organisations may gather more data than they can meaningfully analyse, measure change without assessing causality, or produce evaluations that mislead decision‑makers and compromise future choices.
The measurement trap
The overemphasis on impact measurement creates several specific problems. Organisations may engage in publication bias, only revealing results that funders want to hear, while over-claiming outcomes. The competitive funding environment rewards organisations that can market themselves effectively rather than those that achieve genuine change or demonstrate learning from failures.
Bad evaluations can harm good work. They may overlook effective programs while wrongly funding ineffective ones, and they divert resources that could have supported implementation or direct services. Many organisations fall into the trap of collecting data that they lack resources to analyse properly, resulting in wasted time and effort.
The absence of integrated data systems is a common barrier in evaluation, repeatedly undermining the rigour and utility of SROI and similar analyses. Without consistent systems for data capture, storage, and sharing, organisations have to cobble information together from various and sometimes incompatible sources. Such a fragmented and duplicative data system not only increases manual workload and error risk but also isolates evaluation from day-to-day management, making it harder to generate timely insights. As a result, monitoring and evaluation become disconnected exercises - retrospective and administrative, rather than dynamic tools for learning and adaptive decision-making.
Never fear: we have a solution - but you’ll have to wait for it
Critique alone isn’t enough. The sector needs a way to reclaim meaning without losing rigour. A constructive way forward is to introduce a framework that enables us to make meaningful sense of diverse evidence. Such a framework provides an explicit logic for defining the value proposition of a social investment and assessing how well that proposition is being met. It offers a systematic and transparent way to bring together insights from economics and evaluation, including quantitative and qualitative evidence, to reveal the story behind the numbers. It enables evaluative judgements that are clear, direct, and responsive to donors’ core questions - grounded in reliable evidence and transparent reasoning. Crucially, this approach shares power with stakeholders by surfacing and incorporating their values, ensuring that judgements are contextually grounded and fair.
In Part 2 we will develop a framework for the 1000 Days Fund to demonstrate how it works. Stay tuned.
Update: Part 2 is now live!
Thanks for reading!
Zack Petersen is the Founder, former CEO and now Chief Strategist at the 1000 Days Fund. Julian King is an independent public policy consultant, specialising in evaluation and Value for Investment capability building. While Julian has donated to the 1000 Days Fund (you can too), he has no other relationship with the organisation.
This post was written on a voluntary basis and reflects the views of the authors alone; it does not represent the views of any other individuals or organisations they work with.
The authors thank Fred Carden for helpful peer review. We take responsibility for any errors or omissions.
Visit the 1000 Days Fund website here
Visit Julian’s website here
This critique focuses on philanthropic and private donors. While all funding relationships create accountability demands, major government and multilateral donors have established standards that emphasise learning, rigour, and capacity-building, rather than simple headline numbers. Problems described here are most acute in the philanthropic sector, where evaluation expectations are often less formalised.
SROI is an evaluation method that identifies, describes, and monetises social, environmental, and economic outcomes and costs to estimate the overall value created per dollar invested, typically expressed as a benefit-cost ratio. We have unpacked SROI and similar methods in other posts.