Need practical how-to info that aims to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as hand-picked content by Candid. Thousands of actual evaluations are available for download.

Know of content that should be considered for this collection? Please suggest a report!

"GenY Unfocus Group - KP Digital Health 47613" by Ted Eytan licensed under CC BY-SA 2.0


Five Year Reflection: No Moat Philanthropy - Reflections on five years of working to make the Bush Foundation more permeable

October 1, 2017

The Bush Foundation developed five key learning principles from a No-Moat philanthropy perspective. Since moats and barricades isolate, foundations should be intentional about making an impact through their work. By building relationships, inspiring action, and spreading optimism, foundations increase the positive impact they have in their regions.

Pathways to Progress: A Tangible Impact on Youth Economic Opportunity

June 1, 2017

The inaugural Issue Brief, Pathways to Progress: Setting the Stage for Impact (June 2015), described the Citi Foundation's goals in each of these impact areas and early progress. The second Issue Brief, Pathways to Progress: The Portfolio and the Field of Youth Economic Opportunity (April 2016), focused on impact in the field, including an overview of trends in the youth economic opportunity field and how the Pathways to Progress grantees are responding to and contributing to these trends. The third Issue Brief, Pathways to Progress: Forging Strategies to Broaden Impact (November 2016), focused on organizational and programmatic impacts, including scaling and program adaptation.

This Issue Brief is the fourth and final in the Pathways to Progress series. In this Brief, we focus on the impact of the five flagship Pathways to Progress grantees on the youth they have served, and provide a retrospective look at the progress and select lessons from the first three years of the investment.

Evaluation Framework: Neighborhood Health Status Improvement

March 1, 2017

Neighborhood Health Status Improvement:
• Launched in 2008
• Asset-based, resident-driven, locally focused
• Emphasis on improving the physical, social, and economic environments of neighborhoods

Tools and Frameworks

Matching Evaluation Approaches to Expectations

February 7, 2017

In the nonprofit sector, evaluation is a word that gets used a lot. Different kinds of data gathering approaches with different purposes sometimes get lumped together under the general heading of evaluation. This can lead to miscommunication and unrealistic expectations. To try to clear things up a bit, we have created this resource.

What Is a Tested, Effective Program?

November 21, 2016

As the name implies, tested, effective programs have undergone rigorous testing; their effectiveness is demonstrated by a convincing body of evidence. This evidence covers four categories of information about the program:
• Impact: How much positive change in developmental outcomes comes from the program?
• Intervention specificity: Are the target population, the outcome, and the risk and protective factors to be addressed clearly stated? Does the theory of change or logic model explain how the change comes about?
• Evaluation quality: Can we be confident in the program's evaluation?
• System readiness: Is the program ready for implementation at scale?

Developmental Evaluation in Practice: Lessons from Evaluating a Market-Based Employment Initiative

September 26, 2016

Developmental evaluation (DE) has emerged as an approach that is well suited to evaluating innovative early-stage or market-based initiatives that address complex social issues. However, because DE theory and practice are still evolving, there are relatively few examples of its implementation on the ground. This paper reviews the practical experience of a monitoring and evaluation (M&E) team in conducting a developmental evaluation of a Rockefeller Foundation initiative in the field of digital employment for young people, and offers observations and advice on applying developmental evaluation in practice.

Through its work with The Rockefeller Foundation's team and its grantees, the M&E team drew lessons relating to context, intentional learning, tools and processes, trust and communication, and adaptation associated with developmental evaluation. It found that success depends on commissioning a highly qualified DE team with interpersonal and communication skills and, whenever possible, some sectoral knowledge. The paper also offers responses to three major criticisms frequently leveled against developmental evaluation: that it displaces other types of evaluations, that it is too focused on "soft" methods and indicators, and that it downplays accountability.

Through its reporting of lessons learned and its response to the challenges and shortcomings of developmental evaluation, the M&E team makes the case for including developmental evaluation as a tool in the evaluation toolbox, recommending that it be employed across a wide range of geographies and sectors. It also calls for future undertakings to experiment with new combinations of methods within the DE framework to strengthen its causal, quantitative, and accountability dimensions.

Evaluation in Foundations; Tools and Frameworks

Beyond the Numbers

January 12, 2016

In discussing evaluation and impact, it is easy to get caught up in a numbers game. There is tremendous pressure to report on how our work is making a difference, and scale ("Billions and billions served!") seems like the most expedient way to demonstrate why our organizations and our programs matter. When the goal is simple volume – for example, how many people can we move through this drive-in window? – the metrics can indeed be straightforward. But if the goal is changing social behaviors – how can we get students to stay in school, or how can we break the cycle of poverty, or how can we improve health outcomes? – the numbers can tell us a great deal, but rarely can they tell a complete story.

Evaluating and Supporting Principals

January 1, 2016

This report analyzes the progress of these districts in implementing the fourth key component, evaluation and support systems aligned with the district-adopted standards for leaders. Consistent with the initiative's philosophy that evaluations can be a positive source of guidance for improving practice, districts have agreed to provide novice principals with support tailored to their needs, as identified by evaluations. The ultimate goal of this support—which includes support from supervisors, coaching or mentoring, and professional development—is to strengthen principals' capacity to improve teaching and learning.

Guidelines and Best Practices

Types of Evaluation: Which is Right for You?

November 19, 2015

What we talk about when we talk about impact

Very often, we hear the words "evaluation" and "impact" used interchangeably. Impact evaluation is a type of evaluation, but it is not the only one. Impact evaluation looks to determine the changes that can be directly attributed to a program or intervention. And as we all know, in the complicated landscape of the kinds of social change work that we are typically looking to evaluate, it is very difficult – if not impossible – to attribute behavioral, attitudinal, or other outcomes directly to a particular program.

What follows is an overview of evaluation models that are frequently referenced in evaluation literature. This list is not meant to be exhaustive. Rather, we hope it will offer a starting point to think about the different approaches you can take to evaluate your program, strategy, or intervention. This list is adapted from various sources, which are referenced at the end of this post.

Evaluation approaches

Formative Evaluation or Needs Assessment Evaluation
When you might use it:
• During development of a new program
What it can show:
• Identifies areas for improvement
Why it can be useful:
• Allows program to be modified before full implementation begins

Summative Evaluation or Outcomes Evaluation
When you might use it:
• After program implementation has begun
• At pre-determined intervals of an existing program
• At the conclusion of a program
What it can show:
• Degree to which program is having an effect on knowledge, attitudes, or behaviors of target population
Why it can be useful:
• Shows effectiveness of program against its stated objectives (at particular milestones)

Process / Monitoring Evaluation
When you might use it:
• When program implementation begins
• During operation of existing program
What it can show:
• Extent to which program is being implemented as designed
Why it can be useful:
• Provides early warning if things are not progressing as planned
• Distinguishes program design (theory of change, logic model) from implementation

Developmental Evaluation
When you might use it:
• During implementation of a particularly complex or innovative program
• In conditions of high uncertainty
What it can show:
• Emergence – patterns that emerge from interactions among groups of participants
• Dynamic adaptations – extent to which program is affected by interactions between and among participants
Why it can be useful:
• Can incorporate "nontraditional" concepts such as non-linearity, uncertainty, rapid cycling, and vision-driven (rather than metrics-driven) approaches

Empowerment Evaluation
When you might use it:
• To support a community in building evaluation capacity
What it can show:
• Community knowledge and assets
Why it can be useful:
• Is designed for inclusion, participation, increased capacity, and community ownership

Evaluation Vs Research: Understanding the Why and How

October 21, 2015


Demystifying Evaluation

September 1, 2015

Evaluation is one of those terms that can get a bad rap. Often used interchangeably with "accountability," "measurement," "assessment," and "outcomes," the basic reasons why one might undertake evaluative activities can easily become subsumed under a shroud of potential blame or a sense of failure. Used well, and used thoughtfully, evaluation activities are simply tools to help you better understand your own work and to help arm you with information to make better, more-informed decisions. 

Getting The Most Out of Evaluation

March 16, 2015

As a longtime funder of evidence-based programs and rigorous evaluations, the Edna McConnell Clark Foundation (EMCF) has learned from experience how difficult it can be to build an evidence base and extract the maximum benefit from evaluation. All of EMCF's 19 current grantees are undergoing or have completed rigorous evaluation.

As times, needs, funding, demographics, implementation, and implementers inevitably change, a program must be assessed regularly to ensure it keeps up with these changes and adapts to them. And continual evaluation is the best way to ensure continual innovation, for how else can an organization that tries to do something new or differently determine whether its innovation is successful and discover ways to refine and extend it?