Brief psychological and behavioral interventions are popular in education, with promises of big impacts on academic outcomes for students. However, nearly all of these brief interventions suffer from the same problem: generalizability. It’s one thing to demonstrate that an intervention works in a particular class, school, or sample; it is another beast entirely to claim that an intervention is effective across education broadly.
So, is education hackable? Can we really improve students’ educational outcomes with a 30-minute video or activity?
Unfortunately, probably not.
A new study published in the Proceedings of the National Academy of Sciences tested how different behavioral science interventions fared in massive open online courses, or MOOCs – a rigorous test of generalizability involving around a quarter million students across the world. The study design was iterative: pilot-style tests of the interventions came first, followed by a year of randomly embedding the interventions in MOOCs; the researchers then revised and registered their hypotheses for a second year of randomly embedding the interventions in MOOCs.
The key outcome metric was course completion rates, and the interventions were “plan making” (planning out coursework at the beginning of the term), “mental contrasting with implementation intentions” (planning how to overcome obstacles), “social accountability” (identifying people to check in with regularly about course progress), and “value relevance” (writing about how the course connects to a personally important goal or value).
What was the result? No single intervention had beneficial effects on course completion rates across courses and countries, and the effect sizes were in some cases an order of magnitude smaller than the initial pilot-style studies had suggested. Instead, the effects of the interventions varied by country type (e.g., individualistic, developed) and by whether the intervention occurred in the first or second year of the study. “Plan making,” for example, increased course engagement only in the first week, with no long-term effects. The “value relevance” intervention even had negative effects in some courses, depending on country context. Overall, the results show that noise and small, inconsistent, highly context-dependent effects are the norm in this kind of intervention work.
So, if these types of psychological-behavioral interventions are highly context dependent, then what we need to do is identify the students who would benefit from an intervention and deliver it to them, rather than assigning interventions randomly or to all students. In theory this is a great solution, but it is not easily executed.
A key context variable identified in the research was what the authors call the “global gap”: courses in which completion rates differ between less developed and more developed countries. Interventions had positive effects on students in less developed countries, but only in courses with a global gap. The problem is that a course’s global gap is only known after the course has run, and it is inconsistent over time. A predictive model the researchers developed could not identify which courses would have a global gap the following year better than chance. Moreover, using machine-learning techniques to identify individual students to target with interventions did not increase course completion rates above what random assignment would have achieved.
The results of this massive research effort are important for educators and institutions. The research confirms what many educators and scientists know from experience: context matters. But these results shouldn’t be touted as “See! Context matters! We just need to individualize interventions!” because this study shows we have no idea how to actually do that at scale yet. Yes, we need to be more intentional about educational interventions and more holistic in the ways we strive to improve academic outcomes for students, but saying that is the solution and actually being able to do it are two very different things. As higher education continues to move online at scale, we need to shy away from hackable, short-sighted “solutions” and focus on holistic, personalized, and responsive education.
Thank you for writing this. These words and other conversations have challenged me to better articulate why I feel context is more important than ever in education research. MOOCs are certainly a special example where the variance in learner contexts can be overwhelming. That said, the call for context is about much more than how results are analyzed. It begins with what questions we ask in the first place. For example, I would never expect the same brief behavioral intervention to improve outcomes for an international audience in courses ranging from poetry to data science. That’s just poor design. Machine learning will never fully compensate for poorly implemented interventions.
My greater concern is that as our field searches (or funds searches) for silver bullets, we overlook practical steps we might take to improve student outcomes. It brings to mind Michael Feldstein’s commentary on the Rise Framework: “This isn’t magic. It’s not a robot tutor in the sky. In fact, it’s almost the antithesis. It’s so sensible that it verges on boring”. How many “boring” solutions will we pass up as we search for the impossible?
Feldstein’s post: https://eliterate.us/carnegie-mellon-and-lumen-learning-announce-eep-relevant-collaboration/
Totally agree, Kyle! I think, too, that we continuously try to find a “silver bullet” or what I call a “hackable solution” to education problems — “Do this 30-minute exercise and breeze through college!”
IMO, what’s missing from online education at scale is the human component. Humans are intensely social learners, and the teacher-student relationship needs to be integrated into online education. Edtech is great and can greatly enhance our learning capabilities, but I don’t think we have yet figured out how to match, in an online context, the responsive, personalized social component that comes from human teachers.
Absolutely! Interviewing online students has highlighted for me how essential human interaction is to student success. Often they explain that the key to their success is feeling accountable at biweekly phone calls with their mentor, but beyond accountability, so many other benefits of face-to-face interaction typically go unaccounted for online.