
Clearing the Confusion on EdTech Evidence

EdTech needs different evidence narratives for different imperatives.

Key points

  • EdTech evidence needs a collaboration between researchers, teachers, and designers, with the child’s learning at the heart.
  • EdTech evidence combines children’s and teachers’ experiences with the learning impact of the technologies’ content and design.
  • Different evidence types serve different purposes and imperatives.

In the aftermath of the pandemic, Educational Technology (EdTech) circles have been swamped with talk of evidence. Surprisingly, many EdTech companies have been caught out by a simple question: What is the evidence behind your solution? A safe place to start is to define what we mean by evidence.

Evidence refers to research findings, an evaluation, or a study that is peer-reviewed and, ideally, published in a scientific research journal. Evidence in EdTech is generated through a collaborative effort between researchers, teachers, and designers, with the child’s learning at the heart.

No single study can reliably capture the impact of both products and services across contexts and over time. Rather, EdTech needs to aggregate several studies and sources of evidence. This means measuring both children’s and teachers’ experiences, as well as the learning impact of the technologies’ content and design.

Children’s and teachers’ experiences

Unlike developers in many other industries, EdTech developers often solicit learners’ input into their designs. Indeed, a feasible and valid approach to evidence directly involves the knowledge and skills of classroom teachers. Through a participatory research-design process, teachers and EdTech designers co-create an optimal solution for their classroom. Along the way, EdTech developers document and address what users like and dislike, what they need, and what keeps them engaged with the product.


In our study with the Our Story app, teachers told us they preferred the option of printing children’s stories in various paper sizes, so we implemented this printing feature before launching the app. Similarly, through many observations and discussions with children, we realised that templates and pre-designed stories limited children’s creativity and enjoyment of the story-creation process. We therefore made the app open-ended, with no restriction on the type or length of children’s stories.

To claim that their product works, EdTech providers need to combine evidence of user engagement with evidence of educational achievement. This is where the usability approach is combined with evidence of effectiveness and efficacy.

Efficacy and effectiveness

In efficacy trials, researchers seek to understand whether an EdTech product works under carefully controlled conditions. Typically, they adopt a comparison approach, comparing learning with one EdTech product versus another (for example, the Our Story app versus another story-making app), or a value-added approach (for example, Our Story with and without added multimedia effects). The aim of an effectiveness approach, by contrast, is to understand how EdTech could work, especially in varied classroom contexts that are not tightly controlled.

An effectiveness approach looks at EdTech’s impact at three levels: the child, the teacher, and the EdTech program. A good study combines several measures to triangulate the data at each level. For example, researchers need not only observation checklists but also curriculum-based assessment measures and teachers’ assessments.

Ideally, children’s outcomes are captured with independent measures integrated into an intervention plan that evaluates a child’s progress over time. Teachers’ experiences are captured through researchers’ observations, teachers’ own reports, and self-reflection tools. These include teachers’ views on the implementation process, the EdTech’s influence on their practice, and their subjective satisfaction levels.

In addition, robust studies measure the overall impact on the classroom environment and community (as modelled by the EdTech Hub, for example). Not all methods are always feasible, but an evidence-based approach should incorporate several tools to document effectiveness and monitor the process.

Ongoing professional development

The human element in EdTech design means that many factors can influence the desired chain of effects. That is why the best projects combine scientific knowledge with the collective wisdom of teachers and designers. In this process, they need a logic model or theory of change to guide the planning of an impact evaluation. This includes considering all the linkages among inputs, processes, and outputs, as well as monitoring and evaluation procedures for verifying the strength of these links.

The more teachers are involved in the process, the more insight they gain into how EdTech could work and the more familiar they become with the measures that inform their value judgments. Researchers learn from designers and teachers, and vice versa. Professional development and research go hand in hand, and the gap between EdTech evidence and practice narrows.

The bottom line: There are different horses for different courses, and the best approaches combine several sources of evidence in a transparent evidence portfolio.

References

Wolf, B., & Harbatkin, E. (2022). Making Sense of Effect Sizes: Systematic Differences in Intervention Effect Sizes by Outcome Measure Type. Journal of Research on Educational Effectiveness, 1-28.

Tavares, R., Vieira, R. M., & Pedro, L. (2020). A participatory framework proposal for guiding researchers through an educational mobile app development. Research in Learning Technology, 28.

Kucirkova, N. (2017). iRPD—A framework for guiding design-based research for iPad apps. British Journal of Educational Technology, 48(2), 598-610.
