Brief psychological and behavioral interventions are popular in education, promising big impacts on academic outcomes for students. However, these brief interventions nearly all suffer from the same problem: generalizability. It's one thing to demonstrate that an intervention works in a particular class, school, or sample; it is another beast entirely to claim that an intervention is effective across education broadly.
So, is education hackable? Can we really improve students' educational outcomes with a 30-minute video or activity?
Unfortunately, probably not.
A new study published in the Proceedings of the National Academy of Sciences tested how different behavioral science interventions fared in massive open online courses, or MOOCs – a rigorous test of generalizability, with around a quarter million students involved across the world. The study design was iterative: pilot-style tests of the interventions came first, followed by a year of randomly embedding the interventions in MOOCs; the researchers then revised and registered their hypotheses for a second year of embedding the interventions in MOOCs.
The key outcome metric was course completion rates, and the interventions were "plan making" (planning out coursework at the beginning of the term), "mental contrasting with implementation intentions" (planning how to overcome obstacles), "social accountability" (identifying people to check in with regularly about course progress), and "value relevance" (writing about how the course connects to a personally important goal or value).
What was the result? No single intervention had beneficial effects on course completion rates across courses and countries, and the effect sizes were in some cases an order of magnitude smaller than the initial pilot-style studies had suggested. Instead, the effects of the interventions varied by country type (e.g., individualistic, developed) or by whether the intervention occurred in the first or second year of the study. "Plan making," for instance, increased course engagement only for the first week and had no long-term effects. The "value relevance" intervention even had negative effects in some courses, depending on country context. Overall, the results show that noise and small, inconsistent, highly context-dependent effects are the norm in this kind of intervention work.
So, if these psychological-behavioral interventions are highly context-dependent, the obvious next step is to identify the students who would benefit from a given intervention and deliver it to them, rather than assigning interventions randomly or as a blanket policy for all students. In theory this is a great solution, but it is not easily executed.
A key context variable identified in the research was what the authors call the "global gap": courses in which completion rates differ between less developed and more developed countries. Interventions had positive effects on students in less developed countries, but only in courses with a global gap. The problem is that a course's global gap is only known after the course has run, and it is inconsistent across time; a predictive model the researchers developed could not identify which courses would have a global gap the following year any better than chance. Moreover, using machine-learning techniques to identify individual students to target with interventions did not increase course completion rates above what random assignment would achieve.
The results of this massive research effort are important for educators and institutions. The research confirms what many educators and scientists know from experience: context matters. But these results shouldn't be touted as "See! Context matters! We just need to individualize interventions!" because this study shows that we have no idea yet how to actually do that at scale. Yes, we need to be more intentional about educational interventions and more holistic in how we strive to improve academic outcomes for students, but saying that is the solution and actually being able to do it are two very different things. As higher education continues to move online at scale, we need to shy away from hackable, short-sighted "solutions" and focus on holistic, personalized, and responsive education.