Education
Will AI Transform Educational Standards of Evidence?
The dangers of equivocal evidence for rapid AI innovations and policies.
Posted May 27, 2023 Reviewed by Vanessa Lancaster
Key points
- A heated debate rages among politicians about how best to handle digital learning in classrooms.
- The policies are trying to simplify a complex and equivocal body of research.
- Embracing a plurality of impact measures, paired with clear indicators of the strength of evidence, is the way forward.
The Swedish Minister of Education's announcement about rolling back the nation's digitization strategy caught international attention. Not because of the ministry’s U-turn (U-turns are the DNA of politics) but rather the minister’s claim that the new proposed digitization strategy was not "based on science."
The Swedish Minister is not the only one struggling with scientific evidence in children’s digital education. Indeed, after the COVID-19 online learning disappointment, many Western governments went through a pendulum of "paradigm shifts," promising to change how children learn with screens. Many commentators welcome AI as a force to transform individualized learning but worry that a lack of regulation threatens K-12 education. But there is also another possibility: namely that the ethical threats of AI will push governments to commit to funding the development and implementation of new education policies.
The advent of AI accelerates regulation
Generative AI brings significant changes, and new needs for regulation, across all sectors, including education. As recently highlighted in the EU Parliament's discussions of AI regulation, a purely risk-based approach threatens to stifle innovation and is often counterproductive. Instead, governments should set specific guardrails for the specific types of interactions that arise with AI. Well-defined policies are especially important for complex processes such as learning and digitization.
As the major review of research on children and screens concluded, when it comes to children's learning and technology, there are many variables to take into account.
Perhaps the biggest takeaway from the current status of the literature on children and screens is that the content, context, and characteristics of the material and the interactions supported through digital media have a large impact on children’s outcomes.
What works in education?
The complexity of learning, and the various methods and disciplines used to study it in classrooms, have led to many divergent standards of “what works.” Educational clearinghouses, such as the What Works Clearinghouse in the U.S., are meant to advise the public about which products or approaches schools should use. Yet the latest review of evidence standards across major clearinghouses shows that they apply divergent standards, leading to divergent recommendations for evidence-based educational programs.
The issues of screen time measures, research, and regulation are multifaceted, but political parties demand clear-cut answers. Without deep knowledge of children’s development, policymakers often rely on experts' views to choose one approach or another. Researchers, in turn, are not immune to bias or to pressures for professional advancement. With digitization touching everyone’s life, scientists of all disciplines and experts of all kinds have something to say on the topic. And when the time for developing policies is short, discussion of the various factors can quickly descend into adversarial reports of benefits versus dangers and scientific disagreements.
Scientific disagreements
To be clear, disagreements are the bread and butter of science: scientific progress is made by upending previous thinking. But such shifts in thinking happen after evidence accumulates, not after a few researchers disagree. With no agreed benchmark of evidence strength, reporting scientific disagreements can bias public opinion and the development of national policies, often with dramatic consequences. For example, if a meta-analysis summarizing the results of several studies gets as much media attention as a small-scale study with two children, a false balance of evidence is created.
In this case, which experts are brought to the decision-making table will determine whether kids learn with screens or without. This is a worrying prospect for parents of all nations; not surprisingly, in many, people are protesting in the streets (e.g., in Norway against screens in schools).
Ways forward
The development of evidence-based, sector-wide standards takes time and resources. National governments are investing in AI innovation centres and expert reviews (e.g., the U.S. federal government funds 25 National AI Research Institutes). These initiatives could advance not only AI-specific standards but also educational standards of evidence: standards that specify which types of technologies, learners, and learning interactions work best.
The standards need to be interpreted alongside an indicator of the strength of evidence, with attention to whether independent research studies have verified their claims. Clearly, given the dynamic landscape of technology innovation, the standards need to be regularly updated and continuously verified by research. Given how individualized and highly human-centric quality education is, such research needs to be conducted in collaboration between teachers, scientists, and technology developers. As the Global EdTech Testbed Network concluded, inter-sectoral collaboration is key to advancing children’s learning with educational technologies. By working together, these stakeholders can facilitate more informed and reasoned choices on national educational policies.
References
Wadhwa, M., Zheng, J., & Cook, T. D. (2023). How consistent are meanings of “evidence-based”? A comparative review of 12 clearinghouses that rate the effectiveness of educational programs. Review of Educational Research, 00346543231152262.
Hassinger-Das, B., Brennan, S., Dore, R. A., Golinkoff, R. M., & Hirsh-Pasek, K. (2020). Children and screens. Annual Review of Developmental Psychology, 2, 69-92.