In taking stock of the landscape, this paper promotes a convergence of methods, building from both the impact investment and evaluation fields. The commitment of impact investors to strengthen the process of generating evidence for their social returns alongside the evidence for financial returns is a veritable game changer. But social change is a complex business, and good intentions do not necessarily translate into verifiable impact. As the public sector, bilaterals, and multilaterals increasingly partner with impact investors in achieving collective impact goals, the need for strong evidence about impact becomes even more compelling. The time has come to develop new mindsets and approaches that can be widely shared and employed in ways that will advance the frontier for impact measurement and evaluation of impact investing. Each of the menu options presented in this paper can contribute to building evidence about impact. The next generation of measurement will be stronger if the full range of options comes into play and the more evaluative approaches become commonplace as means for developing evidence and testing assumptions about the processes of change from a stakeholder perspective, with a view toward context and systems. Creating and sharing evidence about impact is a key lever for contributing to greater impact, demonstrating additionality, and building confidence among potential investors, partners, and observers in this emergent industry on its path to maturation. Further, the range of measurement options offers opportunities to choose appropriate approaches that will allow data to contribute to impact management: to improve the business model of ventures and to improve services and systems that better conditions for people and households living in poverty.
Various trends are affecting the field of monitoring and evaluation in the area of international development. Resources have become ever more scarce while expectations for what development assistance should achieve are growing, and the search for more efficient systems to measure impact is on. Country governments are also working to improve their own capacities for evaluation, and demand is rising from national and community-based organizations for meaningful participation in the evaluation process as well as for greater voice and more accountability from both aid and development agencies and government. These factors, in addition to greater competition for limited resources in the area of international development, are pushing donors, program participants, and evaluators themselves to seek more rigorous, and at the same time flexible, systems to monitor and evaluate development and humanitarian interventions. However, many current approaches to M&E are unable to address the changing structure of development assistance and the increasingly complex environment in which it operates. Operational challenges (for example, limited time, insufficient resources, and poor data quality) as well as methodological challenges that affect the quality and timeliness of evaluation exercises have yet to be fully overcome.
Developmental evaluation (DE) has emerged as an approach that is well suited to evaluating innovative early-stage or market-based initiatives that address complex social issues. However, because DE theory and practice are still evolving, there are relatively few examples of its implementation on the ground. This paper reviews the practical experience of a monitoring and evaluation (M&E) team in conducting a developmental evaluation of a Rockefeller Foundation initiative in the field of digital employment for young people, and offers observations and advice on applying developmental evaluation in practice. Through its work with The Rockefeller Foundation's team and its grantees, the M&E team drew lessons relating to context, intentional learning, tools and processes, trust and communication, and adaptation associated with developmental evaluation. It found that success depends on commissioning a highly qualified DE team with interpersonal and communication skills and, whenever possible, some sectoral knowledge. The paper also offers responses to three major criticisms frequently leveled against developmental evaluation: that it displaces other types of evaluations, is too focused on "soft" methods and indicators, and downplays accountability. Through its reporting of lessons learned and its response to the challenges and shortcomings of developmental evaluation, the M&E team makes the case for including developmental evaluation in the evaluation toolbox, recommending that it be employed across a wide range of geographies and sectors. It also calls for future undertakings to experiment with new combinations of methods within the DE framework to strengthen its causal, quantitative, and accountability dimensions.
This paper is part of a series supported by The Rockefeller Foundation's Evaluation Office to explore the implications for evaluation of new development challenges in a rapidly changing world, new financing mechanisms beyond aid, new technologies and a host of new actors.
This best practice provides a framework for identifying relationships in order to evaluate initiatives' social impacts. It emphasizes stakeholders' understanding of exactly how the enterprise will generate social impacts, and highlights the causal relationships between actions, short-term outcomes, and long-term outcomes.
This essay explores the development landscape, suggests new policy directions for international philanthropic organisations, and examines the challenges involved in evaluating their development assistance efforts. It then applies the well-established development effectiveness criteria of relevance, effectiveness, efficiency, and sustainability, issued by the Development Assistance Committee of the OECD, to sketch an evaluation agenda designed to enhance accountability, organisational learning, and responsiveness to stakeholders' concerns within the philanthropic sector.
This is a best practice that provides the following guidelines: (a) conceptual: funders and grantees should align goals, assessment tools, and best practices; (b) operational: grantees and investors should acknowledge evaluation expenses as part of the cost of doing business, invest in measurement systems and tools, and develop examples of proven impact; (c) structural: each field and subfield should explore a range of possible outcome goals and best practices for measurement; (d) practical: a commitment to outcomes assessment can be a fundamental part of the management structure and organizational culture among funders and nonprofits.