Need practical how-to information to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as content hand-picked by Candid. Thousands of actual evaluations are available for download.

More ways to engage:
- Add your organization's content to this collection.
- Send us content recommendations.
- Easily share this collection on your website or app.


The Step-by-Step Guide to Evaluation: How to Become Savvy Evaluation Consumers

November 1, 2017

A new Step-by-Step Guide to Evaluation, released in November 2017 for grantees, nonprofits, and community leaders, is the successor to the original Evaluation Handbook that was published in 1998 and revised in 2014. The new guide is available here via the Download PDF link. The original handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects. Increasingly, we have targeted our grantmaking by funding groups of projects that address issues of particular importance to the Foundation. The primary purpose for grouping similar projects together in "clusters" is to bring about more policy or systemic change than would be possible in a single project or in a series of unrelated projects. Cluster evaluation is a means of determining how well the collection of projects fulfills the objective of systemic change. Projects identified as part of a cluster are periodically brought together at networking conferences to discuss issues of interest to project directors, cluster evaluators, and the Foundation.

Evaluation in Foundations; Guidelines and Best Practices

Evaluation in Philanthropy: Perspectives From the Field

April 3, 2017

This publication offers a brief overview of how grantmakers are looking at evaluation through an organizational learning and effectiveness lens. It is based on a review of the current literature on evaluation and learning, outreach to grantmakers that have made these activities a priority, and the work of GEO and the Council to raise this issue more prominently among their memberships. Many of these grantmakers are testing new approaches to gathering and sharing information about their work and the work of their grantees. We share the learning and evaluation stories of 19 GEO members in the pages that follow.

Evaluation in Foundations; Must-Reads

Situating the Next Generation of Impact Measurement and Evaluation for Impact Investing

December 1, 2016

In taking stock of the landscape, this paper promotes a convergence of methods, building from both the impact investment and evaluation fields. The commitment of impact investors to strengthen the process of generating evidence for their social returns alongside the evidence for financial returns is a veritable game changer. But social change is a complex business, and good intentions do not necessarily translate into verifiable impact. As the public sector, bilaterals, and multilaterals increasingly partner with impact investors in achieving collective impact goals, the need for strong evidence about impact becomes even more compelling. The time has come to develop new mindsets and approaches that can be widely shared and employed in ways that will advance the frontier for impact measurement and evaluation of impact investing. Each of the menu options presented in this paper can contribute to building evidence about impact. The next generation of measurement will be stronger if the full range of options comes into play and the more evaluative approaches become commonplace as means for developing evidence and testing assumptions about the processes of change from a stakeholder perspective, with a view toward context and systems. Creating and sharing evidence about impact is a key lever for contributing to greater impact, demonstrating additionality, and building confidence among potential investors, partners, and observers in this emergent industry on its path to maturation. Further, the range of measurement options offers opportunities to choose appropriate approaches that will allow data to contribute to impact management: to improve on the business model of ventures and to improve services and systems that improve conditions for people and households living in poverty.

Evaluation in Foundations; Tools and Frameworks

Developmental Evaluation in Practice: Lessons from Evaluating a Market-Based Employment Initiative

September 26, 2016

Developmental evaluation (DE) has emerged as an approach that is well suited to evaluating innovative early-stage or market-based initiatives that address complex social issues. However, because DE theory and practice are still evolving, there are relatively few examples of its implementation on the ground. This paper reviews the practical experience of a monitoring and evaluation (M&E) team in conducting a developmental evaluation of a Rockefeller Foundation initiative in the field of digital employment for young people, and offers observations and advice on applying developmental evaluation in practice. Through its work with The Rockefeller Foundation's team and its grantees, the M&E team drew lessons relating to context, intentional learning, tools and processes, trust and communication, and adaptation associated with developmental evaluation. It was found that success depends on commissioning a highly qualified DE team with strong interpersonal and communication skills and, whenever possible, some sectoral knowledge. The paper also offers responses to three major criticisms frequently leveled against developmental evaluation, namely that it displaces other types of evaluations, is too focused on "soft" methods and indicators, and downplays accountability. Through its reporting of lessons learned and its response to the challenges and shortcomings of developmental evaluation, the M&E team makes the case for including developmental evaluation as a tool in the evaluation toolbox, recommending that it be employed across a wide range of geographies and sectors. With this recommendation, it calls for future undertakings to experiment with new combinations of methods within the DE framework to strengthen its causal, quantitative, and accountability dimensions.

Evaluation in Foundations; Tools and Frameworks

Benchmarking Foundation Evaluation Practices

September 1, 2016

For foundations, there are lots of questions to reflect on when thinking about which evaluation practices best align with their strategy, culture, and mission. How much should a foundation invest in evaluation? What can they do to ensure that the information they receive from evaluation is useful to them? With whom should they share what they have learned? Considering these numerous questions in light of benchmarking data about what other foundations are doing can be informative and important. Developed in partnership with the Center for Evaluation Innovation (CEI), Benchmarking Foundation Evaluation Practices is the most comprehensive data collection effort to date on evaluation practices at foundations. The report shares data points and infographics on crucial topics related to evaluation at foundations, such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information. Findings in the report are based on survey responses from individuals who were either the most senior evaluation or program staff at foundations in the U.S. and Canada giving at least $10 million annually, or members of the Evaluation Roundtable, a network of foundation leaders in evaluation convened by CEI.

Evaluation in Foundations; Must-Reads

Planning for Monitoring, Learning, and Evaluation at Small- to Medium-Sized Foundations

July 1, 2016

This report is based on findings from desktop research and interviews with selected foundations conducted between April and June 2016. It was developed to give the Oak Foundation a sense of how other foundations are tackling monitoring, evaluation, and learning (MEL) questions, and to show a range of options for Oak to consider as it develops its own MEL Plan. This summary of findings was developed for public distribution, anticipating that it may be useful for other donors. Key trends that emerged from the interviews and desktop research included the following:

1. Foundations are spending more resources and putting more staff time into evaluation than they did in the past. Staff at smaller foundations tend to spend more time on individual grant evaluations, while staff at larger foundations tend to spend more time on assessments of broad program areas and on learning processes. While many foundations do not have consistent systems for tracking evaluation spending, some are deciding it would be useful to capture that information more methodically.

2. Less attention has been paid to learning to date, but recognition of the importance of purposeful learning is growing quickly. Many foundations are hoping to improve upon their learning processes, but finding that it is not easy. It often requires an internal cultural shift and testing a variety of approaches. In contrast, foundations tend to have fairly clear processes and standards for monitoring and evaluation. Foundations that do have explicit learning efforts remain more focused on internal learning rather than on communicating and sharing lessons externally. Foundations tend to be more transparent with external audiences about their grant-making processes, goals, and strategies, and less transparent about how they assess performance or their lessons learned. That said, both grantees and foundations are recognizing that sharing more lessons externally would be beneficial.

3. Foundations are exploring appropriate and useful ways to evaluate work done through sub-granting organizations. Some are focusing on building the internal monitoring and evaluation capacity of those organizations. It would be useful for donors to coordinate approaches to evaluating work done through sub-granting organizations, which can allow for pooled resources and avoid putting an extra burden on the subgrantor.

Evaluation in Foundations

Kauffman Foundation Grant Application Guide: Developing Expected Outputs and Outcomes

January 1, 2016

As part of your grant application, you will need to construct a series of "output" and "outcome" performance metrics that describe what success will look like at the end of the grant period and how it will be measured. This brief guide is intended to help you build rigorous and specific metrics that will become the basis of future performance reporting to the Foundation.

Evaluation in Foundations; Guidelines and Best Practices

Panorama des fondations et des fonds de dotation créés par des entreprises mécènes - 2016

January 1, 2016

This second edition of the Panorama continues the analysis of the dynamics at work begun in 2014, and of the sector's evolution in terms of evaluation, communication, and strategic orientations, this time focusing more particularly on the theme of the 'general interest'.

Evaluation in Foundations

Gordon and Betty Moore Foundation M&E Landscape

October 1, 2015

To understand best and next practice in measurement and evaluation (M&E) in philanthropy, we interviewed and conducted research on more than 40 social sector and governmental organizations. We talked with M&E leaders at foundations and other organizations considered to be high-performing in M&E, as well as with field specialists with experience across a number of organizations or deep expertise in relevant areas. Our research focused on four central categories of M&E design: structure, staff, methodology, and processes. Through our research, we uncovered several design characteristics that seem common to high-performing M&E units. These include an M&E leader with high positional authority and broad expertise, methodological diversity, a focus on learning, and an evaluation focus beyond individual grants. We also found a number of design characteristics for which there is no one-size-fits-all best-in-class M&E design. Instead, the aim should be to design an M&E unit that is the right fit for the organization in terms of purpose (function) and in keeping with organizational structure and culture. Therefore, to determine the best design for M&E within an organization, it is critical for that organization to be clear on its measurement purpose and clear-eyed about its culture.

Evaluation in Foundations; Guidelines and Best Practices

Learning Together: Actionable Approaches for Grantmakers

May 20, 2015

A majority of grantmakers are struggling to make evaluation and learning meaningful to anyone outside their organizations. Not only is evaluation conducted primarily for internal purposes, but it is usually done by the grantmaker entirely on its own, with no outside learning partners except perhaps an external evaluator, and it provides little value and may even be burdensome to the grantee. It may be that some funders do not consider expanding the scope of learning efforts beyond their own walls. Or perhaps the gap is driven by funding constraints or funder-grantee dynamics. In any case, grantees and other stakeholders are critical partners in the achievement of grantmakers' missions and are therefore critical learning partners as well. In this publication, GEO offers actionable ideas and practices to help grantmakers make learning with others a priority. The publication includes stories about foundations that are learning together with a variety of partners, plus a discussion of the key questions that can help shape successful shared learning. It is based on research and interviews conducted from late 2013 to 2015, including extensive outreach to grantmakers, evaluation practitioners, and others. The focus of GEO's inquiry: documenting the challenges facing grantmakers as they set out to learn with others, lifting up what it takes to do this work successfully, and identifying grantmakers that show a commitment to learning together.

Evaluation in Foundations; Guidelines and Best Practices; Must-Reads

Kauffman Foundation Evaluation Guide

March 1, 2015

In March 2015, the Ewing Marion Kauffman Foundation (EMKF) created a new department tasked with evaluating the Foundation's impact, through both its grant-making and the programs it operates directly. The purpose of this document is to describe and explain the Foundation's policies for measurement and evaluation, which have been drawn from the best practices of others in this field and tailored to the needs of EMKF. Our goal is to clearly describe, for associates, grantees, and the community, the purpose, values, and tools used in evaluation at EMKF. Our evaluation work is guided by six key values:

1. Meaningful: we focus on collecting and analyzing data that directly informs our strategies and goals. Evaluations should include useful and actionable results that assist board, staff, and grantees in decision-making.

2. Credible: evaluations are designed to meet the highest evidentiary standard possible, while recognizing that different approaches are necessary depending on the context. In addition, evaluation materials seek to be as rigorous and accurate as possible, while also noting important limitations and assumptions.

3. Timely: we design and conduct evaluations with the goal of providing results quickly enough to be relevant for informing strategic decisions.

4. Collaborative: all evaluations should be designed with input from multiple stakeholders, including the board, senior leadership, program staff, grantees, and external advisors as appropriate for a particular project.

5. Flexible: the Foundation and its grantees work in areas that are complex and highly fluid, requiring evaluation approaches that are customized to meet the needs and context of each area. In addition, evaluation approaches should be designed to allow for modifications resulting from mid-course corrections or other changes that arise over time.

6. Transparent: we strive to share the results and lessons learned from our grantmaking with the broader community (other funders, non-profit organizations, policymakers, influencers, and the public) so that they may gain from our experiences.

Evaluation in Foundations; Guidelines and Best Practices

Funder Perspectives: Assessing Media Investments

January 23, 2015

How are funders evaluating the outcomes of the media productions and campaigns that they support? Over the past five years, this question has informed a growing array of convenings, reports, and research initiatives within the philanthropic sector, driving the emergence of a small but increasingly visible field of analysts and producers seeking to both quantify and qualify the impact of public interest media. These examinations have stimulated debate among both funders and grantees. Calls for the creation of a single media impact metric or tool have been met with both curiosity and skepticism. Those in favor of impact analysis cite its strategic usefulness in this moment of myriad new and untested media platforms, the importance of concretely tying mission to outcomes, and the need to justify media investments rather than programmatic ones. Detractors raise concerns about how an excess of evaluation might stifle creativity, needlessly limit funding to those projects whose short-term impact can be conclusively proven, or simply bog grantees down in administrative tasks that require entirely different skills, as well as resources. However, these debates have taken place in somewhat of an information vacuum. To date, the conversation about media impact has been led by a limited group of foundations. Little substantive information is available about how a broader range of funders address questions of evaluation. This research project aims to help fill that gap. The report, Funder Perspectives: Assessing Media Investments, explores the multiple and sometimes overlapping lenses through which grantmakers view media evaluation, and confirms that there are still many unanswered questions.

Evaluation in Foundations