
Need practical how-to information to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as hand-picked content from Candid. Thousands of actual evaluations are available for download.



Types of Evaluation: Which is Right for You?

November 19, 2015

What we talk about when we talk about impact

Very often, we hear the words "evaluation" and "impact" used interchangeably. Impact evaluation is a type of evaluation, but it is not the only one. Impact evaluation looks to determine the changes that can be directly attributed to a program or intervention. And as we all know, in the complicated landscape of the kinds of social change work we typically look to evaluate, it is very difficult, if not impossible, to attribute behavioral, attitudinal, or other outcomes directly to a particular program.

What follows is an overview of evaluation models that are frequently referenced in the evaluation literature. This list is not meant to be exhaustive. Rather, we hope it offers a starting point for thinking about the different approaches you can take to evaluate your program, strategy, or intervention. The list is adapted from various sources, which are referenced at the end of this post.

Evaluation approaches

Formative Evaluation or Needs Assessment Evaluation
When you might use it:
- During development of a new program
What it can show:
- Areas for improvement
Why it can be useful:
- Allows the program to be modified before full implementation begins

Summative Evaluation or Outcomes Evaluation
When you might use it:
- After program implementation has begun
- At pre-determined intervals of an existing program
- At the conclusion of a program
What it can show:
- Degree to which the program is having an effect on the knowledge, attitudes, or behaviors of the target population
Why it can be useful:
- Shows the effectiveness of the program against its stated objectives (at particular milestones)

Process / Monitoring Evaluation
When you might use it:
- When program implementation begins
- During operation of an existing program
What it can show:
- Extent to which the program is being implemented as designed
Why it can be useful:
- Provides early warning if things are not progressing as planned
- Distinguishes program design (theory of change, logic model) from implementation

Developmental Evaluation
When you might use it:
- During implementation of a particularly complex or innovative program
- In conditions of high uncertainty
What it can show:
- Emergence: patterns that emerge from interactions among groups of participants
- Dynamic adaptation: extent to which the program is affected by interactions between and among participants
Why it can be useful:
- Can incorporate "nontraditional" concepts such as non-linearity, uncertainty, rapid cycling, and being vision-driven (rather than metrics-driven)

Empowerment Evaluation
When you might use it:
- To support a community in building evaluation capacity
What it can show:
- Community knowledge and assets
Why it can be useful:
- Designed for inclusion, participation, increased capacity, and community ownership

Evaluation Vs Research: Understanding the Why and How

October 21, 2015


Demystifying Evaluation

September 1, 2015

Evaluation is one of those terms that can get a bad rap. Because it is often used interchangeably with "accountability," "measurement," "assessment," and "outcomes," the basic reasons one might undertake evaluative activities can easily become subsumed under a shroud of potential blame or a sense of failure. Used well and used thoughtfully, evaluation activities are simply tools to help you better understand your own work and to arm you with information to make better-informed decisions.

Getting The Most Out of Evaluation

March 16, 2015

As a longtime funder of evidence-based programs and rigorous evaluations, the Edna McConnell Clark Foundation (EMCF) has learned from experience how difficult it can be to build an evidence base and extract the maximum benefit from evaluation. All of EMCF's 19 current grantees are undergoing or have completed rigorous evaluation.

As times, needs, funding, demographics, implementation, and implementers inevitably change, a program must be assessed regularly to ensure it keeps up with these changes and adapts to them. Continual evaluation is the best way to ensure continual innovation, for how else can an organization that tries to do something new or different determine whether its innovation is successful and discover ways to refine and extend it?

Guest Post: Systems Mapping and Evaluation

December 2, 2014

At the Center for Evaluation Innovation, our mission is to push the evaluation field forward in new directions. This often means we're doing things we've never done before, like using systems mapping in our evaluation work. In an earlier blog post, Daniel introduced the systems map for the Madison Initiative and the rationale for creating it. Now that the first draft is public (and open for comments!), we have some early thoughts about using systems mapping as an evaluation tool.

First, to set the context: We are conducting a developmental evaluation of the Madison Initiative's initial phase of experimentation, learning, and field building. Our role is to be a "critical friend" to the strategy team, asking tough evaluative questions, uncovering assumptions, and collecting and interpreting data that inform ongoing strategic decisions. This role deeply affects our choice of evaluation questions, tools, and methods, starting with our choice to use a systems map rather than a theory of change to guide our evaluation work.

How Collaborating on Impact Evaluation Helps Ecosystems

August 27, 2014

Thanks to the authors of Metrics 3.0 for putting together a clear and compelling framework to help guide social businesses in creating value through impact measurement. Each of the four strategies they articulate speaks to trends we see and principles we strive for at d.light in our own impact measurement strategy.

d.light is one of the "small and growing businesses" (SGBs) the Metrics 3.0 framework addresses. Founded in 2006 as a for-profit social enterprise, d.light manufactures and distributes solar lighting and power products designed to serve the more than two billion people globally without access to reliable electricity. Grounded by a theory of change, we employ a three-pronged impact measurement strategy of modeling, monitoring, and evaluation to triangulate conclusions and extract the most authentic insights into how solar energy affects households previously reliant on poor-quality, expensive, and unhealthy alternatives.

We have made major strides in implementing the "Evaluate" half of the Metrics 3.0 framework, and we have a number of impact evaluations underway and in the pipeline. We agree that there is a deep need for more ecosystem-level evaluation, as large-scale evaluations tend to be expensive and resource-intensive.

The Power of Randomized Evaluation: Understanding Issues, Adapting Solutions

August 17, 2014

In international development there is a tension between the drive to "scale what works" and the fundamental reality that the world is complex, and solutions discovered in one place often can't be easily transported to different contexts.

At Innovations for Poverty Action, we use randomized controlled trials to measure which solutions to poverty work and why. We believe that this methodology can help to alleviate poverty, and yet we don't advocate focusing solely on programs that are "proven" to work in this way. One risk of funding only "proven" and therefore "provable" interventions, the "moneyball of philanthropy," is that interventions proven to work in one place could be transposed to new situations without attention to context, which could be a disaster. Also, interventions that cannot easily be subjected to rigorous evaluation may not get funding.

The risk of the other extreme, focusing only on the details of a complex local environment, is that we may fail to uncover important lessons or innovative ideas that can improve the lives of millions in other places. Focusing only on the complexity and uniqueness of each situation means never being able to use prior knowledge, dooming one to constantly reinvent the wheel.

Friday Note: What's the One Really Good Reason Not to Evaluate?

August 8, 2014

Here's the set of questions that together can help us all figure out if an evaluation might make a difference:
- What are the decisions that the findings from the evaluation could inform?
- Are those decisions going to be based on evidence about program effectiveness?
- When are those decisions going to be made?
- Can the evaluation change anyone's mind?

If these questions were applied systematically and early in program design and implementation, we'd have more good and useful evaluations, ones that are well-timed and use appropriate methods. We'd have better clarity about the purpose of the evaluations we conduct. The timing and methods would match the needs of decision makers, and greater transparency could help mitigate political influences. At the same time, we'd end up with fewer evaluations that are purely symbolic.

Intensity of Integration Tracking Form

March 1, 1996

This tool measures the degree to which alliances have been strengthened. The tracking form helps organizations capture how integrated their partnerships and alliances are, ranging from information sharing and communication to formal consolidation and integration.