Released in November 2017 for grantees, nonprofits, and community leaders, the new Step-by-Step Guide to Evaluation is the successor to the original Evaluation Handbook, first published in 1998 and revised in 2014. The original handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects. Increasingly, we have targeted our grantmaking by funding groups of projects that address issues of particular importance to the Foundation. The primary purpose of grouping similar projects together in "clusters" is to bring about more policy or systemic change than would be possible through a single project or a series of unrelated projects. Cluster evaluation is a means of determining how well the collection of projects fulfills the objective of systemic change. Projects identified as part of a cluster are periodically brought together at networking conferences to discuss issues of interest to project directors, cluster evaluators, and the Foundation.
We researched the latest developments in the theory and practice of measurement and evaluation, and we found that new thinking, techniques, and technology are influencing and improving practice. This report highlights eight developments that we think have the greatest potential to improve evaluation and programme design, and the careful collection and use of data. In it, we seek to inform and inspire—to celebrate what is possible, and to encourage wider application of these ideas.
This is a tool for obtaining feedback on how a program is perceived by its various stakeholders (it can be applied at different points along the development value chain: between funders and grantees, and between organizations and their primary constituents). It uses a questionnaire to collect constituents' perceptions of key aspects of an organization's performance. The questionnaire is administered simultaneously to comparable constituency groups across a cohort of similar organizations.
Charting Impact is a common framework that allows staff, boards, stakeholders, donors, volunteers, and others to work together, learn from each other, and serve the community better. It complements the planning, evaluation, and assessment that organizations already undertake, and can be used by nonprofits and foundations of all sizes and missions. Charting Impact is a joint project of Independent Sector, GuideStar, and the BBB Wise Giving Alliance. This discussion guide explains the five questions at the heart of Charting Impact and offers guidance for developing your organization's responses to them. This material should help your discussions with key members of your organization as you consider how to communicate your goals, strategies, capabilities, progress indicators, and accomplishments through this framework.
This paper marks the launch of a new IVMF series focused on the critical topics of program evaluation, performance measurement, and evidence-based practice (EBP). The purpose of the series is to inform the broader community of veteran- and military-family-serving organizations by highlighting examples of organizations employing various methods of EBP, program evaluation, and assessment. By highlighting leading practices across the U.S., this series aims to promote learning and greater impact in service delivery across our nation's evolving and maturing community of veteran and military organizations. This case illustration highlights the evaluation efforts of the rising veteran- and military-serving organization Team Red, White & Blue (Team RWB). Team RWB is a 501(c)(3) nonprofit organization founded in 2010 with the mission of enriching the lives of America's veterans by connecting them to their communities through physical and social activity. Despite its relative youth, in 2014 both the George W. Bush Institute's (GWBI) Military Service Initiative and the IVMF identified Team RWB as a leading organization in building a robust measurement and evaluation program. The paper highlights how Team RWB integrates theory and research to drive its programming as an evidence-based wellness intervention and, in turn, produces data to inform its own organizational practice. Key highlights: Team RWB is an organization that values, at all levels, trust and transparency with its partners, funders, and community. This culture -- embodied by the 'Eagle Ethos' of positivity, passion, people, community, camaraderie, and commitment -- exists throughout the organization, from senior executives down to the community level. Research and evaluation of Team RWB's programs are and will remain vital to communicating its impact and improving how it targets resources to improve and grow its programs.
The Team RWB "Eagle Research Center" is building an evidence base by quantitatively measuring outcomes and using data to improve program delivery. More than 1,800 veterans surveyed in 2014 and 2,500 surveyed in 2015 self-reported that participating in Team RWB helped them create authentic relationships with others, increase their sense of purpose, and improve their health. Veterans also noted that participating in Team RWB had indirect benefits for their family relationships and work. Improvements on these dimensions contribute to an enriched life, with greater program engagement leading to greater enrichment. Team RWB achieves these results through local, consistent, and inclusive programs. The chapter and community programs provide opportunities for physical, social, and service activities. The Leadership Development Program comprises national athletic and leadership camps and a newly launched tiered leader development program.
This free digital resource provides a clear and systematic guide for managers of an evaluation, whether it is conducted by an external evaluator, an internal team, or a hybrid team. In addition to guidance, it provides links to further detail and examples as required. A particular feature is the GeneraTOR, which prompts for specific information to produce a draft Terms of Reference document that can be shared, reviewed, and finalised with other stakeholders.
At Big Lottery Fund, we work with the sector, those we fund, other funders, and policy-makers to deliver high-quality projects and share best practice and learning. Doing this together helps us to maximise the impact of our work. It also helps us to achieve our mission: to support people and communities most in need. One way we do this is through evaluation. We believe evaluation is a key part of any project that is serious about making a real difference. This view is not unique to us: most funding agencies place an emphasis on understanding what impact projects may or may not make, and on understanding why. This guide will help you think about the benefits of evaluating your project and how to get started. It is not meant to be a comprehensive 'how to' document. At the end of this guide, we provide links to some organisations that offer more in-depth information about how to conduct an evaluation and potential tools you could use.
This report analyzes the progress of these districts in implementing the fourth key component, evaluation and support systems aligned with the district-adopted standards for leaders. Consistent with the initiative's philosophy that evaluations can be a positive source of guidance for improving practice, districts have agreed to provide novice principals with support tailored to their needs, as identified by evaluations. The ultimate goal of this support—which includes support from supervisors, coaching or mentoring, and professional development—is to strengthen principals' capacity to improve teaching and learning.
As part of your grant application you will need to construct a series of "output" and "outcome" performance metrics that describe what success will look like at the end of the grant period and how it will be measured. This brief guide is intended to help you build rigorous and specific metrics that will become the basis of future performance reporting to the Foundation.
To understand best and next practice in measurement and evaluation (M&E) in philanthropy, we interviewed and conducted research on more than 40 social sector and governmental organizations. We talked with M&E leaders at foundations and other organizations considered to be high-performing in M&E, as well as with field specialists with experience across a number of organizations or deep expertise in relevant areas. Our research focused on four central categories of M&E design: structure, staff, methodology, and processes. Through our research, we uncovered several design characteristics that seem common to high-performing M&E units. These include: an M&E leader with high positional authority and broad expertise, methodological diversity, a focus on learning, and an evaluation focus beyond individual grants. We also found a number of design characteristics for which there is no one-size-fits-all, best-in-class M&E design. Instead, the aim should be to design an M&E unit that is the right fit for the organization in terms of purpose (function) and in keeping with organizational structure and culture. Therefore, to determine the best design for M&E within an organization, it is critical for that organization to be clear on its measurement purpose and clear-eyed about its culture.
This paper is part of a series supported by The Rockefeller Foundation's Evaluation Office to explore the implications for evaluation of new development challenges in a rapidly changing world, new financing mechanisms beyond aid, new technologies and a host of new actors.
Many funders are already committed to evaluation as a tool to improve their programs and strategies, and are seeking ways to engage in this learning process in partnership with grantees. One critical aspect of understanding and growing the impact of grantmaking dollars is supporting the capacity of grantees to assess their progress and adopt a learning-for-improvement mindset. Many nonprofits already collect and analyze data on program performance and hope to develop more comprehensive learning agendas. However, the nonprofit sector continues to wrestle with finding the resources, time, and space to realize evaluation's full potential. This Smarter Grantmaking Playbook piece offers grantmakers ideas for how to provide evaluation capacity support more effectively.