"GenY Unfocus Group - KP Digital Health 47613" by Ted Eytan licensed under CC BY-SA 2.0
This publication provides an overview of the impetus for the Equitable Evaluation Framework™ (EEF) and attempts to document early moments and first steps of engagement with U.S. philanthropic institutions — most often their research, evaluation and learning staff — whom we refer to as foundation partners throughout this publication. The themes shared in this publication surfaced through conversations with a group of foundation staff who have been part of the Equitable Evaluation Project, now referred to as the Equitable Evaluation Initiative (EEI), since 2017 as advisors, investment partners and/or practice partners. These are not case studies but insights and peeks behind the curtains of six foundation practice partners. It is our hope that, in reading their experiences, you will find something that resonates, be it a point of view, a mindset or a similar opportunity in your place of work.
Designed to help the social sector measure its impact in a responsible manner, the report, Impacting Responsibly, gathers insights from thought leaders in the fields of philanthropy, measurement, and evaluation in nine areas — impact capacity building, impact frameworks and standards, constituent feedback, current reporting burden, resource inequities, impact data ownership, roles and responsibilities, collaboration, and limits of quantitative evidence. The contributions also address questions such as: How can organizations of all sizes and budgets use impact data? How can they better engage those they serve through impact data? How should they handle privacy and data protection? And how can they collaborate to maximize what they can learn from impact data?
When grantmakers focus on learning for improvement, we use evaluation and learning to generate information and insights that will help us better understand both how we're doing in our work and how to improve. A focus on taking action based on what we learn ensures that we are engaged in strategic or applied learning. Our learning should be tied directly to the strategies we are pursuing and the decisions we are making. Learning in Philanthropy: A Guidebook provides a solid basis for thinking and talking about the next steps in our organization's learning work. The guidebook is designed to serve as a resource to help grantmakers answer critical learning questions and embed learning more deeply into the day-to-day work and cultures of our organizations.
In early 2017, ORS Impact evaluated and re-examined the David and Lucile Packard Foundation's monitoring, evaluation, and learning (MEL) principles and practice. The purpose of this evaluation was to discover what works well, identify areas for improvement, and stimulate reflection and experimentation. While this report uncovered many examples of strong MEL practice across the Foundation, it also highlighted opportunities for improvement. Research findings fed into Foundation decisions to update both internal and external MEL processes and requirements, including refinement of the Foundation's Guiding Principles for MEL. Key audiences of this report include readers wrestling with how best to support MEL in philanthropic settings so that it can enable greater learning and impact, such as MEL staff working inside foundations and external evaluators working with foundations.
The aim of this report is to contribute to field dialogue and learning about how to structure complex systems change strategies involving multiple partners.
Effecting social change in a rapidly changing political environment and an increasingly interconnected world requires foundations to adopt a learning orientation. Without continuous learning, grantmakers — and thus boards and trustees — are unaware of what is working where, with whom, and why, as well as what changes or refinements are needed in order to achieve the grantmakers' desired results. This toolkit provides a fresh set of resources for grantmaker CEOs, evaluation staff, and senior leaders to use to engage their boards and trustees in conversations about the importance of strategic learning in their decision-making and deliberation processes.
The Center for Effective Philanthropy (CEP) surveyed private and community foundation leaders regarding what they know about what is and isn't working in their foundations' efforts to achieve their goals. Drawing from 119 survey responses and in-depth interviews with 41 foundation CEOs, the report finds that while the majority of foundation CEOs believe they understand well what is working in their programmatic efforts, more than 40 percent believe their foundation is not investing enough time and money in developing that understanding.
This primer for monitoring and evaluation was developed jointly by Giving Evidence and Keystone for a funder-client that had solicited our services to help them review and rethink their monitoring and evaluation practices, but we feel it is relevant for a broader audience of donors and implementers. The primer establishes a four-level framework covering the monitoring of inputs and outputs, results monitoring, evaluation of grantee impact, and evaluation of the contributions made to impact by the funders themselves.
Foundations create far more knowledge than they use, and most don't capture the full value of this data. Attached is our primer on trends in knowledge management. Learn what's changed, and discover easier ways to work. If you'd like to take advantage of machine learning and artificial intelligence to generate better insights, here are tips on how to design better systems and workflows for your team. The primer also includes a nine-stage scale to assess how sophisticated your current approach is and where you can improve.
In January 2016, Mona Foundation received a grant from the Glaser Progress Foundation to support a process for defining Mona Foundation's theory of change and lay the groundwork for developing a framework to document impact. That goal was accomplished with Mona Foundation developing a theory of change, partnership framework, and strategic goals. In June 2017, the Glaser Progress Foundation awarded Mona Foundation a grant to identify the metrics and process for documenting change. A committee, made up of several external representatives with expertise in education and evaluation and two Mona Foundation staff, provided guidance and feedback on the process.
Despite marked advances in the tools and methods for monitoring, evaluation, and learning in the social sector, and a burgeoning number of bright spots in practice emerging in the field, there is nevertheless broad dissatisfaction across the sector about how data is — or is not — used. Reimagining Measurement has engaged the field in thinking about where monitoring, evaluation, and learning is likely to head over the next decade. Over the course of extensive research and more than 125 conversations with leading foundation executives and program staff, evaluation experts, nonprofit leaders, data wonks, and other stakeholders, it became clear that there is a real divergence between the future people expect for monitoring, evaluation, and learning, and the future people hope for.
A new Step-by-Step Guide to Evaluation, released in November 2017 for grantees, nonprofits and community leaders, is a successor to the original Evaluation Handbook that was published in 1998 and revised in 2014. The original handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects. Increasingly, we have targeted our grantmaking by funding groups of projects that address issues of particular importance to the Foundation. The primary purpose for grouping similar projects together in "clusters" is to bring about more policy or systemic change than would be possible in a single project or in a series of unrelated projects. Cluster evaluation is a means of determining how well the collection of projects fulfills the objective of systemic change. Projects identified as part of a cluster are periodically brought together at networking conferences to discuss issues of interest to project directors, cluster evaluators, and the Foundation.