Charting Impact is a common framework that allows staff, boards, stakeholders, donors, volunteers, and others to work together, learn from each other, and serve the community better. It complements the planning, evaluation, and assessment that organizations already undertake, and can be used by nonprofits and foundations of all sizes and missions. Charting Impact is a joint project of Independent Sector, GuideStar, and the BBB Wise Giving Alliance. This discussion guide explains the five questions at the heart of Charting Impact and offers guidance for developing your organization's responses to them. The material is intended to support discussions with key members of your organization as you consider how to communicate your goals, strategies, capabilities, progress indicators, and accomplishments through this framework.
Neighborhood Health Status Improvement: launched in 2008, this asset-based, resident-driven, locally focused initiative emphasizes improving the physical, social, and economic environments of neighborhoods.
In taking stock of the landscape, this paper promotes a convergence of methods, building from both the impact investment and evaluation fields. The commitment of impact investors to strengthening the process of generating evidence for their social returns alongside the evidence for financial returns is a veritable game changer. But social change is a complex business, and good intentions do not necessarily translate into verifiable impact. As the public sector, bilaterals, and multilaterals increasingly partner with impact investors to achieve collective impact goals, the need for strong evidence about impact becomes even more compelling. The time has come to develop new mindsets and approaches that can be widely shared and employed in ways that advance the frontier for impact measurement and the evaluation of impact investing. Each of the menu options presented in this paper can contribute to building evidence about impact. The next generation of measurement will be stronger if the full range of options comes into play and the more evaluative approaches become commonplace means for developing evidence and testing assumptions about the processes of change from a stakeholder perspective, with a view toward context and systems. Creating and sharing evidence about impact is a key lever for contributing to greater impact, demonstrating additionality, and building confidence among potential investors, partners, and observers in this emergent industry on its path to maturation. Further, the range of measurement options offers opportunities to choose appropriate approaches that allow data to contribute to impact management: to improve the business models of ventures, and to improve the services and systems that improve conditions for people and households living in poverty.
Various trends are reshaping the field of monitoring and evaluation in international development. Resources have become ever more scarce while expectations for what development assistance should achieve are growing, and the search for more efficient systems to measure impact is on. Country governments are also working to improve their own capacities for evaluation, and demand is rising from national and community-based organizations for meaningful participation in the evaluation process as well as for greater voice and more accountability from both aid and development agencies and government. These factors, in addition to greater competition for limited resources in international development, are pushing donors, program participants, and evaluators themselves to seek more rigorous, and at the same time flexible, systems to monitor and evaluate development and humanitarian interventions. However, many current approaches to M&E are unable to address the changing structure of development assistance and the increasingly complex environment in which it operates. Operational challenges (for example, limited time, insufficient resources, and poor data quality), as well as methodological challenges that affect the quality and timeliness of evaluation exercises, have yet to be fully overcome.
Developmental evaluation (DE) has emerged as an approach that is well suited to evaluating innovative early-stage or market-based initiatives that address complex social issues. However, because DE theory and practice are still evolving, there are relatively few examples of its implementation on the ground. This paper reviews the practical experience of a monitoring and evaluation (M&E) team in conducting a developmental evaluation of a Rockefeller Foundation initiative in the field of digital employment for young people, and offers observations and advice on applying developmental evaluation in practice. Through its work with The Rockefeller Foundation's team and its grantees, the M&E team drew lessons relating to context, intentional learning, tools and processes, trust and communication, and adaptation associated with developmental evaluation. It found that success depends on commissioning a highly qualified DE team with interpersonal and communication skills and, whenever possible, some sectoral knowledge. The paper also offers responses to three major criticisms frequently leveled against developmental evaluation: that it displaces other types of evaluations, is too focused on "soft" methods and indicators, and downplays accountability. Through its reporting of lessons learned and its response to the challenges and shortcomings of developmental evaluation, the M&E team makes the case for including developmental evaluation in the evaluation toolbox, recommending that it be employed across a wide range of geographies and sectors. It also calls for future undertakings to experiment with new combinations of methods within the DE framework to strengthen its causal, quantitative, and accountability dimensions.
This tool is designed to help an organization determine its level of readiness for implementing organizational learning and the evaluation practices and processes that support it. The instrument's results can be used to:
1. Identify the existence of learning-organization characteristics;
2. Diagnose interest in conducting evaluation that facilitates organizational learning;
3. Identify areas of strength to leverage evaluative inquiry processes;
4. Identify areas in need of organizational change and development.
The organization may use the results to focus its efforts on improving or further strengthening areas that will lead to greater individual, team, and organizational learning.
This guidance has been produced to assist applicants in preparing their detailed (Stage 2) applications to the Coastal Communities Fund. It provides guidance on identifying, assessing, and monitoring the economic impact potential of projects using an Indicator Framework, against which individual projects will be appraised.
We felt we should begin our work together by crafting a common definition of "high-performance organization." We knew that without a thoughtfully developed, thoroughly vetted definition of "high performance," any call for raising performance in our sector would ring hollow. In addition to providing a common definition of "high performance," the PI also lays out in detail the seven organizational pillars that can help you achieve high performance. To crib from the late author Stephen Covey, these are the seven habits of highly effective organizations. We do not intend this document to be a manifesto. We hope it will be a North Star to guide leaders on a journey of continuous learning and improvement, so they can make as much difference as they possibly can for the people and causes they serve.
For the past two years, we've been collaborating with our assessment partners to establish a benchmark against which the performance of any student, school, or region can be mapped. The scale leverages a nationwide, representative sample of learning competencies prevalent across different school systems. And yet our commitment to evolving and implementing rigorous learning assessment frameworks has never been an end in itself: our primary goal is to use the data and insights to improve learning outcomes. Our work in education assessments, and the beneficial work being done by others in the field, will be most useful if communicated to the broader education community in India. Our assessment partners have prepared reports and conducted workshops to share their findings and explain how our investees can best bring about change in the classroom to enhance academic outcomes.
The Compass, created by the Centre for Social Impact, is your guide to navigating social outcomes and impact measurement. It is for everyone working towards the creation of positive social impact in Australia who wants to know if they are making a difference. The Compass explores and explains key topics, concepts, questions, and principles of outcomes measurement.
This document is Part 1 of a Guide to Network Evaluation and offers the field's current thinking on frameworks, approaches, and tools to address practical questions about designing and funding network evaluations. It was developed alongside a casebook, Evaluating Networks for Social Change: A Casebook, which provides profiles of nine evaluations detailing key questions, methodologies, implementation, and results while expanding what is known about assessment approaches that fit how networks develop and function. What will you learn in the guide?
- How an evaluation can help a network function more effectively and promote network health
- Elements of a network that can be evaluated
- Approaches, methods, and tools for evaluating networks
- How to design a network evaluation that fits the network type and investment (e.g., size, stage of development, issue focus)
- Key questions to ask in a network evaluation
- Examples of network evaluations and what has been learned from them
The casebook is Part 2 of a Guide to Network Evaluation. This document provides profiles of nine evaluations detailing key questions, methodologies, implementation, and results while expanding what is known about assessment approaches that fit how networks develop and function. It was written as a complement to a framing paper, The State of Network Evaluation, which offers the field's current thinking on frameworks, approaches, and tools to address practical questions about designing and funding network evaluations. What will I learn in the casebook?
- How an evaluation can help a network function more effectively and promote network health
- Elements of a network that can be evaluated
- Approaches, methods, and tools for evaluating networks
- How to design a network evaluation that fits the network type and investment (e.g., size, stage of development, issue focus)
- Key questions to ask in a network evaluation
- Examples of network evaluations and what has been learned from them