Developmental evaluation (DE) has emerged as an approach that is well suited to evaluating innovative early-stage or market-based initiatives that address complex social issues. However, because DE theory and practice are still evolving, there are relatively few examples of its implementation on the ground. This paper reviews the practical experience of a monitoring and evaluation (M&E) team in conducting a developmental evaluation of a Rockefeller Foundation initiative in the field of digital employment for young people, and offers observations and advice on applying developmental evaluation in practice.
Through its work with The Rockefeller Foundation's team and its grantees, the M&E team drew lessons relating to context, intentional learning, tools and processes, trust and communication, and adaptation associated with developmental evaluation. It found that success depends on commissioning a highly qualified DE team with strong interpersonal and communication skills and, whenever possible, some sectoral knowledge. The paper also responds to three major criticisms frequently leveled against developmental evaluation: that it displaces other types of evaluation, is too focused on "soft" methods and indicators, and downplays accountability.
Through its reporting of lessons learned and its responses to the challenges and shortcomings of developmental evaluation, the M&E team makes the case for including developmental evaluation in the evaluation toolbox, recommending that it be employed across a wide range of geographies and sectors. It also calls for future undertakings to experiment with new combinations of methods within the DE framework to strengthen its causal, quantitative, and accountability dimensions.
- Creative Commons Attribution-NonCommercial 4.0 International License