Need practical how-to information to help you build your evaluation capacity? This collection includes suggested readings from our friends at BetterEvaluation, the Center for Evaluation Innovation, the Center for Effective Philanthropy, and Grantmakers for Effective Organizations, as well as content hand-picked by Candid. Thousands of actual evaluations are available for download.

Know of content that should be considered for this collection? Please suggest a report!



Shifting the Evaluation Paradigm: The Equitable Evaluation Framework

April 30, 2021

This publication provides an overview of the impetus for the Equitable Evaluation Framework™ (EEF) and documents early moments and first steps of engagement with U.S. philanthropic institutions — most often their research, evaluation, and learning staff — whom we refer to as foundation partners throughout this publication. The themes shared here surfaced through conversations with a group of foundation staff who have been part of the Equitable Evaluation Project, now referred to as the Equitable Evaluation Initiative (EEI), since 2017 as advisors, investment partners, and/or practice partners. These are not case studies but insights and peeks behind the curtains of six foundation practice partners. It is our hope that, in reading their experiences, you will find something that resonates, be it a point of view, a mindset, or a similar opportunity in your place of work.

Impacting Responsibly

May 24, 2019

Designed to help the social sector measure its impact in a responsible manner, the report, Impacting Responsibly, gathers insights from thought leaders in the fields of philanthropy, measurement, and evaluation in nine areas — impact capacity building, impact frameworks and standards, constituent feedback, current reporting burden, resource inequities, impact data ownership, roles and responsibilities, collaboration, and limits of quantitative evidence. The contributions also address questions such as: How can organizations of all sizes and budgets use impact data? How can they better engage those they serve through impact data? How should they handle privacy and data protection? And how can they collaborate to maximize what they can learn from impact data?

Learning in Philanthropy: A Guidebook

May 12, 2019

When grantmakers focus on learning for improvement, we use evaluation and learning to generate information and insights that help us better understand both how we're doing in our work and how to improve. A focus on taking action based on what we learn ensures that we are engaged in strategic or applied learning. Our learning should be tied directly to the strategies we are pursuing and the decisions we are making. Learning in Philanthropy: A Guidebook provides a solid basis for thinking and talking about the next steps in our organization's learning work. The guidebook is designed to serve as a resource to help grantmakers answer critical learning questions and embed learning more deeply into the day-to-day work and cultures of our organizations.

MEL Practice at the David and Lucile Packard Foundation: Evaluation in Support of Moving from Good to Great

January 31, 2019

In early 2017, ORS Impact evaluated and re-examined the David and Lucile Packard Foundation's monitoring, evaluation, and learning (MEL) principles and practice. The purpose of this evaluation was to discover what works well, identify areas for improvement, and stimulate reflection and experimentation. While the report uncovered many examples of strong MEL practice across the Foundation, it also highlighted opportunities for improvement. Research findings fed into Foundation decisions to update both internal and external MEL processes and requirements, including refinement of the Foundation's Guiding Principles for MEL. Key audiences for this report include readers wrestling with how best to support MEL in philanthropic settings so that it can enable greater learning and impact, such as MEL staff working inside foundations and external evaluators working with foundations.

Weaving Successful Partnerships: When Funders, Evaluators, and Intermediaries Work Together

January 23, 2019

The aim of this report is to contribute to field dialogue and learning about how to structure complex systems change strategies involving multiple partners.

Understanding & Sharing What Works: The State of Foundation Practice

November 8, 2018

The Center for Effective Philanthropy (CEP) surveyed private and community foundation leaders regarding what they know about what is and isn't working in their foundations' efforts to achieve their goals. Drawing from 119 survey responses and in-depth interviews with 41 foundation CEOs, the report finds that while the majority of foundation CEOs believe they understand well what is working in their programmatic efforts, more than 40 percent believe their foundation is not investing enough time and money in developing that understanding.

A Primer About Monitoring and Evaluation, for Funders and Implementers

October 1, 2018

This primer on monitoring and evaluation was developed jointly by Giving Evidence and Keystone for a funder-client that had solicited our services to help it review and rethink its monitoring and evaluation practices, but we feel it is relevant to a broader audience of donors and implementers. The primer establishes a four-level framework covering the monitoring of inputs and outputs, results monitoring, evaluation of grantee impact, and evaluation of the contributions made to impact by funders themselves.

Knowledge Management - A Primer

October 1, 2018

Foundations create far more knowledge than they use, and most don't capture its full value. This primer covers trends in knowledge management: what's changed, and easier ways to work. If you'd like to take advantage of machine learning and artificial intelligence to generate better insights, it offers tips on how to design better systems and workflows for your team. It also includes a nine-stage scale for deciding how sophisticated your current approach is and where you can improve.

Mona Foundation Monitoring Evaluation Framework: A Field-generated View of Measuring Sustained Change

March 2, 2018

In January 2016, Mona Foundation received a grant from the Glaser Progress Foundation to support a process for defining Mona Foundation's theory of change and lay the groundwork for developing a framework to document impact. That goal was accomplished, with Mona Foundation developing a theory of change, partnership framework, and strategic goals. In June 2017, the Glaser Progress Foundation awarded Mona Foundation a grant to identify the metrics and process for documenting change. A committee, made up of several external representatives with expertise in education and evaluation and two Mona Foundation staff, provided guidance and feedback on the process.

Reimagining Measurement: A Better Future for Monitoring, Evaluation, and Learning

December 1, 2017

Despite marked advances in the tools and methods for monitoring, evaluation, and learning in the social sector, and a burgeoning number of bright spots emerging in practice, there is nevertheless broad dissatisfaction across the sector about how data is -- or is not -- used. Reimagining Measurement has engaged the field in thinking about where monitoring, evaluation, and learning is likely to head over the next decade. Over the course of extensive research and more than 125 conversations with leading foundation executives and program staff, evaluation experts, nonprofit leaders, data wonks, and other stakeholders, it became clear that there is a real divergence between the future people expect for monitoring, evaluation, and learning, and the future people hope for.

Quality Measurement and Accountability for Community-Based Serious Illness Care: Synthesis Report of Convening Findings and Conclusions

November 1, 2017

The movement of U.S. health care to value-based payment presents a critical opportunity to improve accountability for the quality of serious illness care while constraining the growth of spending. The changing incentives in the health care system are driving innovation in the delivery of serious illness care in traditional Medicare, Medicare Advantage, and commercial plans. Implementation of an accountability system for serious illness care is vital for ensuring that cost containment efforts do not result in undertreatment or worse quality of care for the seriously ill. In May 2017, the Gordon and Betty Moore Foundation convened 45 serious illness care experts and stakeholders - including physicians, researchers, patient advocates, and policy experts - in Banff, Alberta, Canada, to identify a path forward for building an accountability system for high-quality, community-based serious illness care programs. The group reached consensus on a definition of the serious illness population, the necessary components of an accountability system, and guiding principles for quality measurement. In addition, convening participants identified a starter set of quality measures, future pathways for implementation of an accountability system, and needed future research.

Global Innovations in Measurement and Evaluation

June 26, 2017

We researched the latest developments in theory and practice in measurement and evaluation, and found that new thinking, techniques, and technology are influencing and improving practice. This report highlights eight developments that we think have the greatest potential to improve evaluation and programme design, and the careful collection and use of data. In it, we seek to inform and inspire — to celebrate what is possible and encourage wider application of these ideas.

Guidelines and Best Practices