Need practical how-to information to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as content hand-picked by Candid. Thousands of actual evaluations are available for download.


GreatNonprofits Reviews

May 2, 2017

GreatNonprofits offers a tool to find, review, and discuss nonprofits. Much like Amazon book reviews or other consumer review sites (Epinions, Zagat, TripAdvisor, Yelp, etc.), the reviews and ratings are posted by people who have interacted with a nonprofit in some capacity and want to share their opinions about it.

Ecological Footprint

May 1, 2017

The Ecological Footprint is a resource accounting tool that measures how much of the planet's biological capacity is demanded by a given human activity or population. It takes into account six primary areas: cropland, grazing land, fishing grounds, forest, built-up land, and land for carbon absorption. The Ecological Footprint calculator, an application of the Footprint tool, asks the user a series of questions about eating habits, household size, transportation usage, and other factors to determine how many global hectares are required to support the entity.
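To make the accounting idea concrete, here is a minimal, hypothetical Python sketch: answers to the calculator's questions are translated into per-area land demands, which are then summed into a single figure in global hectares. The area keys and the numbers are illustrative assumptions, not Global Footprint Network's actual model or data.

```python
# A simplified sketch of Footprint accounting: each activity demands area
# from one of six land-use categories, and the total footprint is the sum
# of those demands in global hectares (gha). All figures are made up.
AREAS = (
    "cropland",
    "grazing_land",
    "fishing_grounds",
    "forest",
    "built_up_land",
    "carbon_absorption_land",
)

def total_footprint(demands: dict[str, float]) -> float:
    """Sum per-area demands (in gha) into one footprint figure."""
    return sum(demands.get(area, 0.0) for area in AREAS)

# Hypothetical household answers translated into per-area demands.
household = {
    "cropland": 1.1,                # eating habits
    "carbon_absorption_land": 2.4,  # transportation and energy use
    "built_up_land": 0.3,           # household size and housing
}

print(f"Estimated footprint: {total_footprint(household):.1f} gha")
```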

Beyond the Numbers

January 12, 2016

In discussing evaluation and impact, it is easy to get caught up in a numbers game. There is tremendous pressure to report on how our work is making a difference, and scale ("Billions and billions served!") seems like the most expedient way to demonstrate why our organizations and our programs matter. When the goal is simple volume – for example, how many people can we move through this drive-in window? – the metrics can indeed be straightforward. But if the goal is changing social behaviors – how can we get students to stay in school, or how can we break the cycle of poverty, or how can we improve health outcomes? – the numbers can tell us a great deal, but rarely can they tell a complete story.

Types of Evaluation: Which is Right for You?

November 19, 2015

What we talk about when we talk about impact

Very often, we hear the words "evaluation" and "impact" used interchangeably. Impact evaluation is a type of evaluation, but it is not the only one. Impact evaluation looks to determine the changes that can be directly attributed to a program or intervention. And as we all know, in the complicated landscape of the kinds of social change work we typically look to evaluate, it is very difficult, if not impossible, to attribute behavioral, attitudinal, or other outcomes directly to a particular program.

What follows is an overview of evaluation models that are frequently referenced in the evaluation literature. This list is not meant to be exhaustive. Rather, we hope it will offer a starting point for thinking about the different approaches you can take to evaluate your program, strategy, or intervention. The list is adapted from various sources, which are referenced at the end of this post. (A brief sketch of how the taxonomy might guide a choice follows the list.)

Evaluation approaches

Formative Evaluation or Needs Assessment Evaluation
When you might use it:
- During development of a new program
What it can show:
- Areas for improvement
Why it can be useful:
- Allows the program to be modified before full implementation begins

Summative Evaluation or Outcomes Evaluation
When you might use it:
- After program implementation has begun
- At predetermined intervals of an existing program
- At the conclusion of a program
What it can show:
- The degree to which the program is having an effect on the knowledge, attitudes, or behaviors of the target population
Why it can be useful:
- Measures the program's effectiveness against its stated objectives (at particular milestones)

Process / Monitoring Evaluation
When you might use it:
- When program implementation begins
- During operation of an existing program
What it can show:
- The extent to which the program is being implemented as designed
Why it can be useful:
- Provides early warning if things are not progressing as planned
- Distinguishes program design (theory of change, logic model) from implementation

Developmental Evaluation
When you might use it:
- During implementation of a particularly complex or innovative program
- In conditions of high uncertainty
What it can show:
- Emergence: patterns that arise from interactions among groups of participants
- Dynamic adaptations: the extent to which the program is affected by interactions between and among participants
Why it can be useful:
- Can incorporate "nontraditional" concepts such as non-linearity, uncertainty, rapid cycling, and being vision-driven (rather than metrics-driven)

Empowerment Evaluation
When you might use it:
- To support a community in building evaluation capacity
What it can show:
- Community knowledge and assets
Why it can be useful:
- Is designed for inclusion, participation, increased capacity, and community ownership
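As a purely illustrative exercise, here is a minimal Python sketch of the taxonomy above expressed as a lookup table: program stages mapped to candidate evaluation approaches. The stage names and the mapping are simplifying assumptions condensed from the list, not a prescription from the original post; real selection depends on context and on what decisions the findings will inform.

```python
# Hypothetical mapping from program stage to candidate evaluation approaches,
# condensed from the taxonomy above. Stage names are illustrative assumptions.
APPROACH_BY_STAGE = {
    "designing": ["Formative / Needs Assessment"],
    "launching": ["Process / Monitoring"],
    "operating": ["Process / Monitoring", "Summative / Outcomes"],
    "concluding": ["Summative / Outcomes"],
    "complex_or_uncertain": ["Developmental"],
    "building_community_capacity": ["Empowerment"],
}

def suggest(stage: str) -> list[str]:
    """Return candidate evaluation approaches for a given program stage."""
    return APPROACH_BY_STAGE.get(stage, [])

print(suggest("operating"))  # ['Process / Monitoring', 'Summative / Outcomes']
```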

Evaluation Vs Research: Understanding the Why and How

October 21, 2015

Demystifying Evaluation

September 1, 2015

Evaluation is one of those terms that can get a bad rap. Often used interchangeably with "accountability," "measurement," "assessment," and "outcomes," the basic reasons why one might undertake evaluative activities can easily become subsumed under a shroud of potential blame or a sense of failure. Used well and used thoughtfully, evaluation activities are simply tools to help you better understand your own work and arm you with the information to make better-informed decisions.

Getting The Most Out of Evaluation

March 16, 2015

As a longtime funder of evidence-based programs and rigorous evaluations, the Edna McConnell Clark Foundation (EMCF) has learned from experience how difficult it can be to build an evidence base and extract the maximum benefit from evaluation. All of EMCF's 19 current grantees are undergoing or have completed rigorous evaluation.

As times, needs, funding, demographics, implementation, and implementers inevitably change, a program must be assessed regularly to ensure it keeps up with these changes and adapts to them. Continual evaluation is also the best way to ensure continual innovation, for how else can an organization that tries to do something new or different determine whether its innovation is successful and discover ways to refine and extend it?

Evaluating a Nonprofit

February 2, 2015

Nonprofit Evaluation Process

The truth is that many factors go into evaluating the strength of a nonprofit, and depending on the donor's intent or the focus of your giving, you can weigh any number of them when deciding how to allocate your gifts, volunteer your time, or engage more deeply with an organization.

Prioritize, Capitalize, Right Size and More: 5 Insights from the Skoll Foundation on Monitoring and Evaluation

January 1, 2015

Ehren Reed, the Skoll Foundation's Director of Evaluation, was recently asked what matters when he looks at the measurements social entrepreneurial organizations use. "Organizations have the power to achieve the change donors are looking to make," he says. Here are some of his "izes," as he calls them:

Contextualize. It's helpful when I can see clearly how the work is contextualized within an organization's efforts. Whether it's Kevin Starr's eight-word mission statement or a theory of change, there needs to be a clear description of the organization's goals and the actions intended to lead to them. I want to know how the metrics you're sharing connect with that core strategy.

Prioritize. There are a ton of things you could be measuring. The fact that you have gone through an exercise to winnow them down to meaningful measures is a good sign. Those measures should be influenced by what you are able to do with the information. If you are collecting something you are not making use of, you are wasting time and money.

Capitalize. Don't answer a question you need not answer. Certain outcomes and indicators are critical to your work and more attributable to your efforts; concentrate on those. For others, you can say, "We made a contribution to those," and leave them alone. For example: Citizen Schools increases graduation rates of students who attend its program by 20 percent over a control group; that's the compelling story. I don't need to know whether that leads to greater income generation after high school graduation; there are studies that already show me that. Be efficient with the way you are spending your dollars.

Right-size. Not everyone in the organization needs to look at the same data. At One Acre Fund, workers in the field pay attention to which farmers are attending trainings, how well farmers are taking up the particular techniques they are taught, and what the repayment rates are. That's the information they need to see whether they are doing their job effectively. Middle managers look at aggregated data. Leadership looks at only a key set of performance indicators. So right-size your approach accordingly.

Systematize. The idea that we see M&E as a separate report gives me pause; it's a dangerous misconception. Monitoring and evaluation need to be part and parcel of your programmatic activity. If everything is focused on a report that comes out once a year, and there is not a lot behind the scenes leading up to that report, that gives me pause. An example: your car's dashboard metrics let you know whether the car is functioning effectively, and you look at the dashboard every day. It's only when you sell the car that you say, "It gets a lot of miles per gallon," or "It's been in two minor accidents."

The Power of Randomized Evaluation: Understanding Issues, Adapting Solutions

August 17, 2014

In international development there is a tension between the drive to "scale what works" and the fundamental reality that the world is complex, and solutions discovered in one place often can't be easily transported to different contexts.

At Innovations for Poverty Action, we use randomized controlled trials to measure which solutions to poverty work and why. We believe that this methodology can help to alleviate poverty, and yet we don't advocate focusing solely on programs that are "proven" to work in this way. One risk of funding only "proven" and therefore "provable" interventions – the "moneyball of philanthropy" – is that interventions proven to work in one place could be transposed to new situations without attention to context, which could be a disaster. Another is that interventions that cannot easily be subjected to rigorous evaluation may not get funding.

The risk of the other extreme – focusing only on the details of a complex local environment – is that we may fail to uncover important lessons or innovative ideas that could improve the lives of millions in other places. Focusing only on the complexity and uniqueness of each situation means never being able to use prior knowledge, dooming one to constantly reinvent the wheel.

Friday Note: What's the One Really Good Reason Not to Evaluate?

August 8, 2014

Here's the set of questions that together can help us all figure out if an evaluation might make a difference:

- What are the decisions that the findings from the evaluation could inform?
- Are those decisions going to be based on evidence about program effectiveness?
- When are those decisions going to be made?
- Can the evaluation change anyone's mind?

If these questions were applied systematically and early in program design and implementation, we'd have more good and useful evaluations, ones that are well-timed and use appropriate methods. We'd have better clarity about the purpose of the evaluations we conduct. The timing and methods would match the needs of decision makers, and greater transparency could mitigate political influences. At the same time, we'd end up with fewer evaluations that are purely symbolic.

Organizational Grant Program Reporting + Invoicing Workshop

July 1, 2014

Year 1 / Year 2 report forms for the Organizational Grant Program Reporting and Invoicing Workshop.

Part I: Evaluating Your Progress
Part II: OGP Report Questions
Part III: Online Reporting System