Need practical how-to information to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as content hand-picked by Candid. Thousands of actual evaluations are available for download.

Know of content that should be considered for this collection? Please suggest a report!

"GenY Unfocus Group - KP Digital Health 47613" by Ted Eytan licensed under CC BY-SA 2.0


Understanding & Sharing What Works: The State of Foundation Practice

November 8, 2018

The Center for Effective Philanthropy (CEP) surveyed private and community foundation leaders regarding what they know about what is and isn't working in their foundations' efforts to achieve their goals. Drawing from 119 survey responses and in-depth interviews with 41 foundation CEOs, the report finds that while the majority of foundation CEOs believe they understand well what is working in their programmatic efforts, more than 40 percent believe their foundation is not investing enough time and money in developing that understanding.

Practical Innovations in Accountability: Comparative Constituency Feedback

May 1, 2017

This is a tool for gathering feedback on how a program is perceived by various stakeholders; it can be applied at different points along the development value chain, between funders and grantees, and between organizations and their primary constituents. It uses a questionnaire to collect constituents' perceptions of key aspects of an organization's performance. The questionnaire is administered simultaneously to comparable constituency groups across a cohort of similar organizations.

Guidelines and Best Practices

Benchmarking Foundation Evaluation Practices

September 1, 2016

For foundations, there are many questions to reflect on when deciding which evaluation practices best align with their strategy, culture, and mission. How much should a foundation invest in evaluation? What can it do to ensure that the information it receives from evaluation is useful? With whom should it share what it has learned? Considering these questions in light of benchmarking data about what other foundations are doing can be informative and important.

Developed in partnership with the Center for Evaluation Innovation (CEI), Benchmarking Foundation Evaluation Practices is the most comprehensive data collection effort to date on evaluation practices at foundations. The report shares data points and infographics on crucial topics related to evaluation at foundations, such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information.

Findings in the report are based on survey responses from individuals who were either the most senior evaluation or program staff at foundations in the U.S. and Canada giving at least $10 million annually, or members of the Evaluation Roundtable, a network of foundation leaders in evaluation convened by CEI.

Evaluation in Foundations; Must-Reads

Funder Perspectives: Assessing Media Investments

January 23, 2015

How are funders evaluating the outcomes of the media productions and campaigns that they support? Over the past five years, this question has informed a growing array of convenings, reports, and research initiatives within the philanthropic sector, driving the emergence of a small but increasingly visible field of analysts and producers seeking to both quantify and qualify the impact of public interest media.

These examinations have stimulated debate among both funders and grantees. Calls for the creation of a single media impact metric or tool have been met with both curiosity and skepticism. Those in favor of impact analysis cite its strategic usefulness in this moment of myriad new and untested media platforms, the importance of concretely tying mission to outcomes, and the need to justify media investments rather than programmatic ones. Detractors raise concerns about how an excess of evaluation might stifle creativity, needlessly limit funding to projects whose short-term impact can be conclusively proven, or simply bog grantees down in administrative tasks that require entirely different skills and resources.

However, these debates have taken place in something of an information vacuum. To date, the conversation about media impact has been led by a limited group of foundations, and little substantive information is available about how a broader range of funders address questions of evaluation. This research project aims to help fill that gap. The report, Funder Perspectives: Assessing Media Investments, explores the multiple and sometimes overlapping lenses through which grantmakers view media evaluation, and confirms that there are still many unanswered questions.

Evaluation in Foundations

Activity Packet: Tools, Tips, and Resources for Using Surveys

December 4, 2014

This activity packet provides tools, tips, and resources for arts organizations to use surveys in their assessments.

Alliance for Justice Advocacy Capacity Tool (ACT)

May 1, 2012

This tool assesses organizational capacity and specifically measures the advocacy capacity of a prospective or current grantee. It includes 18 indicators divided into four categories: advocacy goals, plans, and strategies; conducting advocacy; advocacy avenues; and organizational operations to sustain advocacy. The tool describes the capacities to which an organization should aspire if it wants to institutionalize its advocacy work. Sample measures include "the organization has a well written agenda, adopted by its board, that identifies the organization's priorities (such as issue priorities) for legislative and other types of advocacy."

Tools and Frameworks

Born Learning Washington Monthly Media Tracking Form

December 5, 2007

This is a tool that measures a strengthened base of media support. It tracks the amount of campaign-generated media visibility that has been developed, noting dimensions such as type of media (TV, radio, print, other) and type of placement (PSA, news story, op-ed, programming).

Campaign Champions Data Collection Tool

December 5, 2007

This is a tool that measures a strengthened base of public support. The form tracks and measures the number of "champions" (people who take actions to advance the public will) engaged and the actions these champions undertake as part of the Born Learning campaign.

Learning from Clients: Assessment Tools for Microfinance Practitioners

June 6, 2007

SEEP Loan and Savings Use Interview: This tool is a qualitative 60-90 minute interview that attempts to measure the impact of lending. It identifies how microentrepreneurs and low-income borrowers use financial resources (both loan capital and savings) to carry out their economic strategies for their businesses and households.

SEEP Impact Survey: This tool assesses the impact of microenterprise programs, primarily at the enterprise, household, and individual levels. It uses a cross-sectional impact survey.

SEEP Empowerment Interview: This tool uses a qualitative interview to identify the impact of women's access to and use of microloans. It identifies changes in women's self-esteem, control over resources, skills, household relationships, and status within their communities.

SEEP Client Satisfaction Focus Group: This tool uses a focus group discussion to inform and improve program effectiveness by identifying clients' satisfaction and dissatisfaction with its specific elements.

SEEP Client Exit Survey: This tool attempts to inform and improve program effectiveness. It seeks to determine why and when clients leave the program, what clients think about the program's strengths and weaknesses, and what they perceive the program's impact to be. It takes the form of a quantitative survey with standard questions and pre-determined answer categories, administered as an individual interview requiring 15-20 minutes.

Tools and Frameworks

HAP 2007 Humanitarian Accountability and Quality Management Standard

January 30, 2007

The HAP Standard is a tool that seeks to measure: 1) accountability and quality commitments made by an aid agency; 2) the quality management system, meaning the processes used by the aid agency to achieve the commitments made; and 3) quality of service, as defined by disaster survivors, affected communities, partners, aid practitioners, and other specified stakeholders. To achieve certification, an agency must demonstrate that it meets the 6 benchmarks and 19 requirements in the HAP Standard. These cover the three areas mentioned above, with specific attention to continual improvement. Designed for use by practitioners, researchers, and donors, the Guide to the HAP Standard provides practical advice for improving the quality and accountability of humanitarian action; for agencies seeking HAP certification, it is an indispensable resource. The Additional Tools section links directly to the Annex of the Guide, which includes sample surveys, checklists, indicators, quality assurance tests, SWOT analyses, and guidance on conducting interviews, observations, and focus groups; engaging communities; setting up feedback mechanisms; and holding lessons-learned meetings.

Tools and Frameworks

How Do We Know We Are Making a Difference? A Community Alcohol, Tobacco, and Drug Indicators Handbook

January 1, 2005

This is a tool that measures improved policies, using a form that records changes in the amount and location of alcohol advertising.

Tools and Frameworks

Intensity of Integration Tracking Form

March 1, 1996

This is a tool that measures strengthened alliances. The tracking form helps organizations capture how integrated their partnerships and alliances are, ranging from information sharing and communication to formal consolidation and integration.