Short-Cycle Innovation Driving Better International Education

By Timothy Sullivan

A Global Learning Crisis

Globally, 263 million children do not go to school. Another 330 million attend school but are not learning. The global education community must not only expand children's access to primary education but also confront a far more complex question: how do we ensure that children who attend school receive a high-quality education?

An Opportunity for Innovation

At Bridge International Academies (Bridge), we operate and support over 1,000 schools across Nigeria, Kenya, Uganda, Liberia, and India. Our model uniquely positions us to explore how different instructional practices affect children's learning outcomes in real classroom environments.

The Instructional Design team at Bridge International Academies creates detailed lesson guides that local, in-country teachers download onto their handheld computers. Every day, these teachers access the latest lessons, expertly designed by our educators to align with each country's national curriculum. This combination of scale and technology allows us to embrace and deploy the latest science of learning.

Because the Instructional Design team can synchronize different lesson guides to different teacher computers, we can quantitatively evaluate new and innovative approaches to pedagogy and lesson design. Using this model, we compare new methods of instruction with incumbent lessons via randomized A/B testing across hundreds of comparable schools within the same national network, and we measure the outcomes for children using teacher-driven pupil assessment data. To illustrate the process and implications of such an approach, here is an example from our network of schools in Lagos, Nigeria.
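To make the mechanics concrete, here is a minimal sketch of what such a randomized assignment might look like in code. This is an illustration only; the school identifiers and function names are invented, not part of Bridge's actual systems.

```python
import random

def assign_lesson_guides(school_ids, seed=42):
    """Randomly split schools into a control arm (incumbent lesson guides)
    and a treatment arm (new lesson design) for an A/B comparison."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "control": shuffled[:midpoint],    # keep the incumbent lessons
        "treatment": shuffled[midpoint:],  # receive the new lessons
    }

# Example: ten comparable schools within one national network
arms = assign_lesson_guides([f"school_{i:03d}" for i in range(10)])
print(arms["control"])
print(arms["treatment"])
```

Randomizing which teacher computers receive which guides is what allows differences in pupil outcomes to be attributed to the lesson design rather than to the schools themselves.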

By analyzing pre-existing data from the reading comprehension subtask of the Early Grade Reading Assessment, a widely recognized literacy assessment for primary school pupils, we identified a major challenge confronting our pupils: while they were successfully answering direct reading comprehension questions, they struggled to answer inferential questions based on the text. In practice, this meant that after reading a text, pupils could answer a question like ‘What was Felix’s favorite food?’ but had difficulty answering a question like ‘What was the problem in the story?’

Intrigued by a growing body of academic literature supporting the use of interactive read-alouds in early-grade literacy instruction, we set out to evaluate whether this approach could work in our own context. Interactive read-alouds, in which the teacher models the process of inferential thinking while reading a text together with the class, had some evidential basis for improving pupils' outcomes, but there was no explicit evidence linking them to improvements in children's inferential thinking.

Once we had isolated the most pressing challenge and a potential solution, we defined the central aim of the intervention: to improve inferential thinking skills among early-grade pupils through the design and administration of an interactive read-aloud.

From here, the lesson design teams in Lagos and Boston designed an intervention, grounded in the existing literature on interactive read-alouds, that might improve inferential thinking ability. We adapted the approach to our context by crafting an interactive read-aloud that could be delivered through our model of teacher guides, then integrated it into our existing reading program, where it replaced some of the pupils' 45 minutes of daily independent e-reader reading practice.

Before full-scale testing of the interactive read-alouds, we piloted the new lesson guides and measurement tool with four Primary 2 teachers at different schools in Lagos, each working in a classroom of about 30 pupils. These tests built upon earlier pilots conducted with a similar program in Bridge schools in India. Working closely with our in-country academic field team, we experimented with different approaches to interactive read-alouds, iterated on them, and obtained teacher feedback on the instructional strategy itself.

One of the major questions was how to craft a read-aloud for teachers that clearly distinguished between the read-aloud (i.e., the teacher reading the story aloud to pupils) and the think-aloud (the teacher modeling the inferential thinking process). These pilots informed our understanding of how to support teachers in delivering the read-alouds with expressive language and in modeling the inferential thinking strategy with analytic language. The pilots also clarified the right balance between the read-aloud itself and the interactive pupil questions interspersed throughout it.

Finally, the pilots highlighted particular linguistic aspects of the read-aloud that were confusing for teachers to follow and deliver. We also piloted and refined a measurement tool capable of detecting subtle changes in pupils' inferential thinking ability. This piloting ensured that the measurement tool consisted of valid items and was sufficiently discriminating.
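As an illustration of what "sufficiently discriminating" can mean in practice, one standard psychometric check is the item-total correlation: how strongly each item correlates with a pupil's score on the remaining items. The sketch below uses invented response data, not our actual assessment.

```python
import numpy as np

# Invented data: rows are pupils, columns are items (1 = correct, 0 = incorrect).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
])

totals = responses.sum(axis=1)

# Item-total correlation, excluding each item from its own total so the
# item is not correlated with itself. Items near zero discriminate poorly
# between stronger and weaker pupils and are candidates for revision.
for item in range(responses.shape[1]):
    rest = totals - responses[:, item]
    r = np.corrcoef(responses[:, item], rest)[0, 1]
    print(f"item {item}: discrimination = {r:.2f}")
```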

Once the design of the lesson and measurement tool was finalized, we began the experiment. Half of the Primary 2 teachers in Lagos (33 teachers) received the incumbent teacher guides, in which pupils engage in sustained independent reading using the e-readers; the other half received new teacher guides that equipped them to deliver an interactive read-aloud. Teachers in each group delivered these lessons for two months, during which our academic field team conducted daily qualitative lesson observations, including video, to supplement the quantitatively measured pupil outcomes.

One unique element of our system for innovation is our capacity to evaluate outcomes in a streamlined, teacher-driven fashion. Outside of Bridge, experiments such as this would require significant resources to assess children before and after the intervention across hundreds of participating schools. With our technology, however, individual teachers enter pupil test data into their teacher computers, which we download and analyze to evaluate the intervention. This is supplemented by data on attendance and lesson completion, plus the field team's qualitative lesson observations, to understand the success or failure of the intervention.
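Once the scores are downloaded, the core comparison is simple. The sketch below runs a two-sample (Welch's) t-test on invented pupil scores; in a real analysis, pupils are clustered within classrooms, so the comparison would need to account for that clustering (for example, by comparing classroom means).

```python
import numpy as np
from scipy import stats

# Invented pupil scores "downloaded" from teacher computers.
control = np.random.default_rng(0).normal(55, 15, 900)    # incumbent lessons
treatment = np.random.default_rng(1).normal(58, 15, 900)  # read-aloud lessons

# Welch's t-test: is the difference in mean scores larger than chance?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"mean difference: {treatment.mean() - control.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```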

Lessons Learned

So what did we learn? We found early indications that pupils who received interactive read-alouds scored marginally higher (~6%) on the inferential thinking test than their counterparts who engaged in sustained independent reading, though given the limited statistical power, this difference was within the margin of error. We also found that the approach, delivered through our lesson guides, was accessible to teachers and was implemented with a high degree of fidelity. Qualitative observations added fascinating nuance to the picture, including instances in which pupils literally gasped in anticipation as teachers delivered an interactive read-aloud.

This test taught us not only about the potential value of the interactive read-aloud for improving inferential thinking; it also showed us ways to strengthen the innovation process itself. First, we appropriately started with a small pilot of the intervention, but the relatively small sample size (approximately 65 schools) limited the statistical power of the evaluation and, as a result, the generalizability of the results. While the results could not immediately justify a change at scale, they inform subsequent evaluations with more robust sample sizes. Second, we learned that this short-cycle innovation process was too brief for such an intervention to produce a statistically significant effect on pupil outcomes.
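A back-of-the-envelope power calculation shows why the sample size mattered. Assuming, purely for illustration, that the observed difference corresponds to a small standardized effect (Cohen's d ≈ 0.2) and treating each classroom as one unit of analysis:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Units needed per arm to detect d = 0.2 with 80% power at alpha = 0.05
required = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"required per arm: ~{required:.0f}")  # roughly 394

# Power actually achieved with ~33 units per arm, as in the Lagos test
achieved = analysis.solve_power(effect_size=0.2, alpha=0.05, nobs1=33)
print(f"power with 33 per arm: {achieved:.2f}")  # roughly 0.13
```

With so little power, even a real effect of that size would usually fail to reach statistical significance, which is exactly the pattern the Lagos test showed.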

Subsequent tests in Nigeria and elsewhere have built on these lessons: now that we have refined our approach through smaller-scale pilots, we are running longer evaluations (from two to six months) with larger sample sizes (in Kenya, for instance, across 327 of our schools) to obtain more statistically robust data.

Finally, we have gone on to partner with leading academics in learning science and educational economics to design and conduct these innovation cycles. Through these collaborations, we have strengthened the design of our experiments to improve statistical power and, where an intervention has proven effective, to implement new methods confidently across our school network at scale. Importantly, these partnerships will enable us to contribute more actively and robustly to the global conversation about how we can collectively improve the quality of instruction for the 330 million children who are in school but not learning.

We remain intellectually curious about learning processes, just as our Nigerian pupils are curious to learn.

Working to solve this global issue is a privilege, and it is encouraging to see the tremendous impact of the work on boys and girls in classrooms across Africa. Overall, we are amassing evidence that this iterative approach to improving learning is both effective and efficient. The independent government exams our pupils sit, and excel in, are a testament to the value they gain inside a Bridge school.

Thank you to Timothy Sullivan for contributing this piece. He is the Instructional Design Director at Bridge International Academies. Previously, he worked as a teacher and administrator on the Pine Ridge Indian Reservation in South Dakota and in Chuuk, Micronesia.

If you're excited about the potential of lean startup for social good, check out the upcoming release of Lean Impact: How to Innovate for Radically Greater Social Good (Wiley, Oct 10), and explore the rest of our blog series for more inspiring stories.