Measurement innovations

At the GC-DWC, we undertake applied research focused on utility, feasibility, and rigor. Utility, at its core, concerns whether the research will be useful for making reasonable decisions about the activity, intervention, strategy, or program being implemented. Data about children's learning and development need to serve a practical goal: allowing practitioners and policymakers to learn how their program affects children and what changes they can make to programming to improve children's and adolescents' development. Feasibility concerns the actual process of conducting research in low-resource and crisis contexts; it asks whether that process is doable given the logistical, operational, and systemic limitations in place.

Rigor concerns not only the validity and reliability of the research being conducted but also whether the thresholds of rigorous research are actually viable for practitioners and researchers working in low-resource, crisis-affected, and fragile contexts. Our team partners with practitioners, researchers, and policymakers in a range of low-resource and crisis contexts to ensure that the research not only adds to the global evidence base on what works to support whole child development but is also usable, feasible, and rigorous enough to inform our partners' programmatic and policy decisions.

To do this, the GC-DWC uses a range of qualitative, quantitative, and mixed-methods research approaches during the development, pilot, proof-of-concept, and scale-up phases of programs focused on child development and learning. Below is a description of a selection of the methods we use.

REALM

Rapid Evaluation, Assessment, and Learning Methods (REALM) are systematic monitoring and evaluation strategies that take an expeditious approach to program design and improvement, using timely yet data-driven, actionable evidence to support well-informed decision-making. Developed and applied across a range of contexts and disciplines, REALM strategies share a set of core characteristics but differ in context and purpose. With origins in the humanitarian global health sector, REALM was originally intended for time- and resource-sensitive settings that demand evidence of multi-sectoral impact in a short period of time. Unlike summative evaluations, which assess the overall impact of a program, REALM assesses the impact of individual program components, changes, or alternatives by gathering data, analyzing findings, and taking action over cycles of anywhere from a few weeks to a few months.

QuIP

The Qualitative Impact Protocol (QuIP) is a qualitative approach that assesses the impact of interventions by collecting narrative statements from program participants. Through open-ended, exploratory questions about changes in expected program outcomes, QuIP aims to disentangle possible sources of influence by avoiding questions that are specific to the programs being evaluated. In this way, QuIP provides an independent reality check that helps assess, learn from, and demonstrate the social impact of a program. In Haiti, the GC-DWC used QuIP to gauge the effectiveness of our work to improve parents' and teachers' knowledge of childhood development (e.g., nurturance, care, nutrition, school readiness, and positive discipline) and of social and emotional learning strategies.

RCTs

Randomized Controlled Trials (RCTs) are a widely used method of measuring program efficacy. RCTs reduce bias by randomly assigning participants to a treatment group and a comparison group. The GC-DWC has used RCTs effectively to measure the impact of our programming, such as a 2016 randomized controlled trial of curriculum in Haiti, which revealed statistically significant gains in 7 of the 8 Early Grade Reading Assessment indicators, including a 143% increase in letter recognition and a 49% increase in reading fluency.
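
The logic of random assignment is simple enough to sketch in a few lines of code. The Python snippet below is purely illustrative and is not drawn from the Haiti trial: the participant IDs, group sizes, and scores are hypothetical. It shows only the basic mechanics an RCT relies on, randomly splitting a roster into treatment and comparison groups and expressing the difference between group means as a relative (percent) gain.

    import random
    import statistics

    # Hypothetical roster of participant IDs (illustrative only).
    participants = [f"child_{i:03d}" for i in range(1, 201)]

    # Random assignment: shuffle the roster, then split it evenly
    # into a treatment group and a comparison group.
    random.seed(42)  # fixed seed so the illustration is reproducible
    random.shuffle(participants)
    half = len(participants) // 2
    treatment, comparison = participants[:half], participants[half:]

    # Hypothetical endline scores on a reading indicator
    # (e.g., correct letters identified per minute).
    treatment_set = set(treatment)
    scores = {
        pid: random.gauss(30 if pid in treatment_set else 22, 5)
        for pid in participants
    }

    treatment_mean = statistics.mean(scores[pid] for pid in treatment)
    comparison_mean = statistics.mean(scores[pid] for pid in comparison)

    # Express the treatment-comparison difference as a percent gain.
    percent_gain = 100 * (treatment_mean - comparison_mean) / comparison_mean
    print(f"Treatment mean:  {treatment_mean:.1f}")
    print(f"Comparison mean: {comparison_mean:.1f}")
    print(f"Relative gain:   {percent_gain:.0f}%")

In practice, an analysis like this would also account for baseline scores, clustering (e.g., by school), and statistical significance testing; the sketch covers only assignment and the percent-change comparison.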