Bridging the Evidence Gaps in EdTech

A Collaborative Effort for a Better Ecosystem

In the ever-evolving EdTech landscape, evidence and evaluation frameworks are essential to ensuring the efficacy, safety, sustainability, and trustworthiness of educational technologies. However, despite ongoing efforts, key gaps and challenges remain in how we gather and apply evidence in this field. In particular, the question of how best to support educators and decision-makers in interpreting evaluation mechanisms and relating them to their local practice often goes unaddressed. To explore these challenges, we have been bringing together diverse stakeholders — from educators and developers to policymakers and researchers — for an open and collaborative discussion on the critical role of evidence in the EdTech ecosystem.

The role of the European Edtech Alliance

The European Edtech Alliance aims to create meaningful opportunities for knowledge exchange. By fostering dialogue among those who shape and use evaluation frameworks, we can gain deeper insights into the needs, requirements, and gaps within the current evidence and evaluation landscape. This exchange is vital for several reasons:

  • Increasing visibility:
    First, it helps to increase visibility for these important topics. While evidence-based evaluation is a priority for many within the sector, its importance is often downplayed or overshadowed by the pressing need for innovation and scaling. By spotlighting evidence and evaluation frameworks, we can ensure that these considerations are central to discussions about the future of EdTech and evidence-informed decision-making.

  • Supporting existing framework developers:
    Second, through these dialogues, we aim to support existing framework developers. Many evaluation systems already in place do excellent work; however, responses from educators, ministries, and strategy lab respondents suggest that certain aspects are not yet optimal. By identifying where gaps or inefficiencies exist, we can offer targeted suggestions that enhance the robustness and reach of these frameworks. This isn’t about reinventing the wheel — it’s about building on existing systems to ensure they function as effectively as possible for all stakeholders, especially educators.

  • Knowledge exchange: 
    Finally, the knowledge gained from these exchanges will be shared back with developers of evaluation frameworks and key decision-makers. Our goal is to offer comprehensive research that helps address current gaps, enabling decision-makers and evaluation framework developers to mitigate these issues and enhance the overall functionality of the ecosystem. By taking a collaborative, evidence-informed approach, we can work together toward an evaluation system that is transparent, effective, and trusted across the board.

Moving forward

In the long term, the EEA’s efforts are focussed on creating a better-functioning ecosystem that serves all parties involved. The EEA aims to connect the dots between existing and developing evidence practices and does not intend to create an evaluation mechanism of its own. Instead, the EEA intends to actively support the evaluation ecosystem: whether you're an EdTech developer looking for robust evaluation methods, an educator seeking to use evidence to make informed decisions, or a policymaker working to implement effective technologies, we believe that by addressing these evidence gaps, we can foster a healthier, more sustainable EdTech environment.

Digital Transformation Event 

As a concrete step toward this goal, we are excited to announce an upcoming event on the 4th of December in partnership with UNESCO’s Digital Transformation Collaborative. This event will serve as a key moment for stakeholders to come together and discuss these issues in depth, identify actionable solutions, and chart a course forward for a more evidence-driven future in EdTech.

Make sure to register, and we look forward to your participation as we work collectively to strengthen the EdTech evaluation ecosystem for the benefit of all!
