"GenY Unfocus Group - KP Digital Health 47613" by Ted Eytan licensed under CC BY-SA 2.0
"GenY Unfocus Group - KP Digital Health 47613" by Ted Eytan licensed under CC BY-SA 2.0
25 results found
This publication provides an overview of the impetus for the Equitable Evaluation Framework™ (EEF) and documents early moments and first steps of engagement with U.S. philanthropic institutions — most often their research, evaluation and learning staff — whom we refer to as foundation partners throughout this publication. The themes shared here surfaced through conversations with a group of foundation staff who have been part of the Equitable Evaluation Project, now referred to as the Equitable Evaluation Initiative (EEI), since 2017 as advisors, investment partners and/or practice partners. These are not case studies but insights and peeks behind the curtains of six foundation practice partners. It is our hope that, in reading their experiences, you will find something that resonates, be it a point of view, a mindset or a similar opportunity in your place of work.
A new Step-by-Step Guide to Evaluation, released in November 2017 for grantees, nonprofits and community leaders, is the successor to the original Evaluation Handbook, which was published in 1998 and revised in 2014. The new guide is available here by clicking Download PDF. The original handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects. Increasingly, we have targeted our grantmaking by funding groups of projects that address issues of particular importance to the Foundation. The primary purpose of grouping similar projects together in "clusters" is to bring about more policy or systemic change than would be possible in a single project or in a series of unrelated projects. Cluster evaluation is a means of determining how well the collection of projects fulfills the objective of systemic change. Projects identified as part of a cluster are periodically brought together at networking conferences to discuss issues of interest to project directors, cluster evaluators, and the Foundation.
Neighborhood Health Status Improvement: launched in 2008; asset-based, resident-driven and locally focused, with an emphasis on improving the physical, social, and economic environments of neighborhoods.
In taking stock of the landscape, this paper promotes a convergence of methods, building from both the impact investment and evaluation fields. The commitment of impact investors to strengthen the process of generating evidence for their social returns alongside the evidence for financial returns is a veritable game changer. But social change is a complex business, and good intentions do not necessarily translate into verifiable impact. As the public sector, bilaterals, and multilaterals increasingly partner with impact investors in achieving collective impact goals, the need for strong evidence about impact becomes even more compelling. The time has come to develop new mindsets and approaches that can be widely shared and employed in ways that will advance the frontier for impact measurement and evaluation of impact investing. Each of the menu options presented in this paper can contribute to building evidence about impact. The next generation of measurement will be stronger if the full range of options comes into play and the more evaluative approaches become commonplace as means for developing evidence and testing assumptions about the processes of change from a stakeholder perspective – with a view toward context and systems. Creating and sharing evidence about impact is a key lever for contributing to greater impact, demonstrating additionality, and building confidence among potential investors, partners and observers in this emergent industry on its path to maturation. Further, the range of measurement options offers opportunities to choose appropriate approaches that will allow data to contribute to impact management – to improve the business model of ventures and to improve services and systems that improve conditions for people and households living in poverty.
For foundations, there are many questions to reflect on when thinking about which evaluation practices best align with their strategy, culture, and mission. How much should a foundation invest in evaluation? What can it do to ensure that the information it receives from evaluation is useful? With whom should it share what it has learned? Considering these questions in light of benchmarking data about what other foundations are doing can be informative and important. Developed in partnership with the Center for Evaluation Innovation (CEI), Benchmarking Foundation Evaluation Practices is the most comprehensive data collection effort to date on evaluation practices at foundations. The report shares data points and infographics on crucial topics related to evaluation at foundations, such as evaluation staffing and structures, investment in evaluation work, and the usefulness of evaluation information. Findings in the report are based on survey responses from individuals who were either the most senior evaluation or program staff at foundations in the U.S. and Canada giving at least $10 million annually, or members of the Evaluation Roundtable, a network of foundation leaders in evaluation convened by CEI.
This paper marks the launch of a new IVMF series focused on the critical topics of program evaluation, performance measurement, and evidence-based practice (EBP). The purpose of the series is to inform the broader community of veteran and military family serving organizations by highlighting examples of organizations employing various methods of EBP, program evaluation, and assessment. By highlighting leading practices across the U.S., this series aims to promote learning and greater impact in service delivery across our nation's evolving and maturing community of veteran and military organizations. This case illustration highlights the evaluation efforts of the rising veteran and military serving organization Team Red, White & Blue (Team RWB). Team RWB is a 501(c)(3) nonprofit organization founded in 2010 with the mission of enriching the lives of America's veterans by connecting them to their communities through physical and social activity. Despite its relative youth, in 2014 the George W. Bush Institute's (GWBI) Military Service Initiative and the IVMF both identified Team RWB as a leading organization in building a robust measurement and evaluation program. The paper highlights how Team RWB integrates theory and research to drive its programming as an evidence-based wellness intervention and, in turn, produce data to inform its own organizational practice. Key Highlights: Team RWB is an organization that values, at all levels, trust and transparency with its partners, funders, and community. This culture -- embodied by the 'Eagle Ethos' of positivity, passion, people, community, camaraderie, and commitment -- exists throughout the organization from the senior executive down to the community level. Research and evaluation of RWB's programs are and will remain vital to communicating its impact and improving how it targets resources to improve and grow its programs.
The Team RWB "Eagle Research Center" is building an evidence base by quantitatively measuring its outcomes and using data to improve its program delivery. More than 1,800 veterans surveyed in 2014 and 2,500 surveyed in 2015 self-reported increases in creating authentic relationships with others, increasing their sense of purpose, and improving their health by participating in Team RWB. Veterans also noted that participating in Team RWB had indirect benefits for their family relationships and work. Improvements on these dimensions contribute to an enriched life, with more program engagement leading to more enrichment. Team RWB achieves these results through local, consistent, and inclusive programs. The chapter and community programs provide opportunities for physical, social, and service activities. The Leadership Development Program comprises national athletic and leadership camps and a newly launched tiered leader development program.
In discussing evaluation and impact, it is easy to get caught up in a numbers game. There is tremendous pressure to report on how our work is making a difference, and scale ("Billions and billions served!") seems like the most expedient way to demonstrate why our organizations and our programs matter. When the goal is simple volume – for example, how many people can we move through this drive-in window? – the metrics can indeed be straightforward. But if the goal is changing social behaviors – how can we get students to stay in school, break the cycle of poverty, or improve health outcomes? – the numbers can tell us a great deal, but rarely can they tell a complete story.
This report analyzes the progress of these districts in implementing the fourth key component, evaluation and support systems aligned with the district-adopted standards for leaders. Consistent with the initiative's philosophy that evaluations can be a positive source of guidance for improving practice, districts have agreed to provide novice principals with support tailored to their needs, as identified by evaluations. The ultimate goal of this support—which includes support from supervisors, coaching or mentoring, and professional development—is to strengthen principals' capacity to improve teaching and learning.
The latest news and occasional commentary about what's happening at the Foundation and around our great state.
A majority of grantmakers are struggling to make evaluation and learning meaningful to anyone outside their organizations. Not only is evaluation conducted primarily for internal purposes, but it is usually done by the grantmaker entirely on its own -- with no outside learning partners except perhaps an external evaluator -- and provides little value and may even be burdensome to the grantee. It may be that some funders do not consider expanding the scope of learning efforts beyond their own walls. Or perhaps the gap is driven by funding constraints or funder-grantee dynamics. In any case, grantees and other stakeholders are critical partners in the achievement of grantmakers' missions and are therefore critical learning partners as well. In this publication, GEO offers actionable ideas and practices to help grantmakers make learning with others a priority. The publication includes stories about foundations that are learning together with a variety of partners, plus a discussion of the key questions that can help shape successful shared learning. It is based on research and interviews conducted from late 2013 to 2015, including extensive outreach to grantmakers, evaluation practitioners and others. The focus of GEO's inquiry: documenting the challenges facing grantmakers as they set out to learn with others, lifting up what it takes to do this work successfully and identifying grantmakers that show a commitment to learning together.
This paper bridges the academic literature and ordinary practice to show how nonprofit organizations, regardless of where they are on the spectrum of evaluation capacity, and regardless of whether they conduct evaluation internally or use external consultants, can strengthen their ability to engage in and sustain an ongoing evaluation practice. These suggestions are not exhaustive, but they are meant to be practical, accessible, and realistically doable for most nonprofits.
How are funders evaluating the outcomes of the media productions and campaigns that they support? Over the past five years, this question has informed a growing array of convenings, reports and research initiatives within the philanthropic sector, driving the emergence of a small but increasingly visible field of analysts and producers seeking to both quantify and qualify the impact of public interest media. These examinations have stimulated debate among both funders and grantees. Calls for the creation of a single media impact metric or tool have been met with both curiosity and skepticism. Those in favor of impact analysis cite its strategic usefulness in this moment of myriad new and untested media platforms, the importance of concretely tying mission to outcomes, and the need to justify media investments rather than programmatic ones. Detractors raise concerns about how an excess of evaluation might stifle creativity, needlessly limit funding to those projects whose short-term impact can be conclusively proven, or simply bog grantees down in administrative tasks that require entirely different skills, as well as resources. However, these debates have taken place in somewhat of an information vacuum. To date, the conversation about media impact has been led by a limited group of foundations, and little substantive information is available about how a broader range of funders address questions of evaluation. This research project aims to help fill that gap. The report, Funder Perspectives: Assessing Media Investments, explores the multiple and sometimes overlapping lenses through which grantmakers view media evaluation, and confirms that there are still many unanswered questions.