This article was originally published on the Results for Development Blog
I’ll be honest—choosing the greatest moments from our evaluation and learning work in the education sector wasn’t easy. In the last 12 months, we’ve been busy doing exciting, challenging, innovative work alongside smart, committed partners. But when it came down to it, there were certain things that stood out, and I managed to whittle down a long list of favorites into four 2016 highlights:
#1 Reading to my son
My husband and I had a baby boy in August. At his two-week doctor’s appointment, our pediatrician ended the appointment with a plea: “Please, please, if you have only one routine, make it that you read to your baby, every day.” The doctor probably wasn’t expecting the tangent that followed, in which I told him about the work we’ve been doing in partnership with Pearson and Worldreader in Delhi, India. I told him that we are experimenting with how to best encourage caregivers to read to their very young children: we’re testing the use of health clinics, youth groups, and primary schools as channels for teaching parents, older siblings, and community leaders about the value of reading to children well before they are literate or even verbal.
In the six months since experimentation began, we’ve uncovered good (and not-so-good) practices about the nitty-gritty details of changing people’s attitudes and behaviors around reading. The data is rolling in, and it’s telling us that mothers, fathers, sisters, brothers, aunts, and uncles are reading to their little ones. I’m reminded of this every time I sit down to read with mine; I can’t help but think that the learning curve for our work in Delhi is nearly as steep as it is for first-time parents—and how good it feels to learn so much so quickly. Which leads to my next favorite thing…
#2 Learning we were wrong (sooner rather than later)
One of the core tenets of R4D’s evaluation and learning work is that it is always worth taking the time to explicitly state our assumptions about why we think a program will work, and to test those assumptions in the field. Doing so ensures that our partners are on the right track before resources are spent implementing a model that doesn’t work.
Here’s an example from Delhi: We assumed that one-on-one interactions were more likely than group sessions to encourage caregivers to read more to young children, because the messaging could be tailored to the individual, there wouldn’t be distractions, and the caregiver would have the chance to ask questions and get tips on how to engage his or her child. But we decided to test this assumption by implementing both one-on-one and group interactions. The data collected during our testing phase revealed that caregivers who attended group sessions were actually more likely to become frequent readers.
We believe (based on qualitative research) that this is because the group sessions create positive peer pressure and “buzz” about the app and about reading in general. Luckily, we learned this after just a few months, and local partners have been able to adjust their outreach efforts, combining initial group sessions with one-on-one follow-up interactions. Which explains my next favorite thing…
Students at a Rising Academy school in Sierra Leone. ©Results for Development/Jean Arkedis
#3 Rapid prototyping, failing fast, and other bloggy jargon
R4D got its hands dirty this year as we worked with partners to quickly test tools and approaches before rolling them out on a large or long-term scale. In Sierra Leone, we are working with Rising Academies, a network of low-cost private schools, to test approaches to improving students’ reading skills. Before you can improve a child’s reading ability, you need to know his or her current reading ability. Teachers and program managers chose a reading assessment that seemed best suited to their students’ reading levels; but through rapid prototyping, we discovered that the assessment was actually pegged too low to accurately capture most students’ ability.
In other words, students had already made large literacy gains and “outgrown” the assessment. We learned this in just a few days of testing, and were able to revise the assessment. Without this rapid feedback, the assessment would have been rolled out to hundreds of students in several schools, producing data of limited value.
Using these findings, we worked with our partners at Rising Academy Network to design and test a set of reading interventions. Which brings us to my final, and most favorite thing …
#4 People who are willing to change course
When we talk about the kind of partners we look for in R4D’s evaluation and learning work, our number one criterion is that they be willing to change, shift, adapt, tweak, and even start over.
Our partners are incredible examples of this—willing to put in more legwork early on, to slow down on the front end—in order to get to a more impactful program in the long run. When I talk to prospective partners about our work, it seems that this philosophy is becoming more and more common. It helps that some funders are getting on board, too, by encouraging reflection and adaptation in addition to (or instead of) reporting against predetermined benchmarks.
As 2016 draws to a close, I look forward to spending 2017 with more people and organizations who are not just willing, but excited, to try, reflect, and adapt.
Molly Jamieson Eberhardt is a program director at Results for Development (R4D). She leads the portfolio of learning and evaluation work in the organization’s Global Education practice. This includes working with programs to embed rigorous monitoring and evaluation methods into their program design and implementation efforts to facilitate data-driven decision making. Molly works with implementing partners to develop research plans and facilitates processes to translate the findings into improved program design and impact.
Photo Credits (top to bottom): Worldreader; R4D.