The different types of evaluation frameworks
Understanding a comprehensive evidence ecosystem
In the face of rapidly evolving EdTech, and the policies and requirements that govern its implementation, the need for robust and trustworthy evaluation frameworks has never been greater. As schools and institutions increasingly rely on digital tools, ensuring that these technologies are effective, safe, and aligned with educational goals is essential. One way to ensure that tools meet standards of safety and trustworthiness is to put EdTech through evaluation or certification processes. A number of evaluation mechanisms that aim to address these issues are already in place or in development globally.
A preliminary mapping of the EdTech ecosystem conducted by the European EdTech Alliance indicates that quality assurance frameworks can generally be categorised into several overarching types, each driven by distinct motivations, led by different stakeholder groups, and producing different outcomes. Each category of evaluation arises from different needs and incentives, and addresses different stakeholders in different ways, with its own purposes, aims, and objectives.
For example, frameworks designed to test research hypotheses or to further academic research into EdTech are often led by research-focused institutions. They mostly result in studies that are targeted in scope, exploring the impact and effects of technologies or aiming to fill a gap in existing knowledge. The results, although important, can be misaligned with the informational needs and requirements of other stakeholders within the education ecosystem who may be trying to interpret them. See the figure below for examples and more information about the types of frameworks identified.
Because of these differences, each type of framework offers unique strengths and plays a crucial role in shaping the EdTech ecosystem. Together, they form an extensive foundation for the evaluation of EdTech, covering research, policy, commercial validity and user experience, and collectively they support both the development of high-quality, impactful educational technologies and evidence-based decision making.
Current gaps in the evaluation landscape
Drawing from global studies and collaborative workshops across Europe, several key gaps emerge in the current evaluation landscape. Differences in motivation, objectives and potential outputs mean that the framework types often operate in silos and/or fail to adequately engage all key stakeholders on their own. Additionally, variations between framework types can lead to inconsistent or incomparable results that are not always translated into practical, appropriate terms for all stakeholders. In particular, educators, learners and procurement decision-makers are often not adequately addressed in communication efforts.
For example, public-sector-led evaluation systems can be grounded in research and provide vital commercial legitimacy, but they often assess the more technical components of EdTech, driven by a need to ensure that systems operate safely at scale. These evaluations, whilst vitally important for the ecosystem, can be hard for educators or school procurement agents to interpret in a meaningful way; these stakeholders look for more practical information related to their specific implementation environment, e.g. how a certain product works for a specific subject or grade level, or how it integrates with existing systems in their school. By the same token, the legitimacy of the different types of evaluation frameworks is sometimes called into question by various stakeholders: participants in the strategy lab workshops noted, for example, that certifications led by private-sector organisations are often questioned by research institutions and policy initiatives for a lack of impartiality or trustworthiness, based on the perceived motivations of the certifying organisations.
These mismatches between stakeholder needs and the outputs of the different framework types can make it challenging for educators and policymakers to navigate the vast array of EdTech solutions effectively, for developers to ensure their solutions adhere to meaningful validation methods and evidence requirements, and for researchers to align the ecosystem's core research needs with essential questions of methodology.
For example, local policy and decision makers have not always accepted or adopted these frameworks. Although some promising frameworks are under development, they are often not yet thoroughly articulated, may currently address only certain areas of evaluation or a single motivational focus, and can demonstrate neither widespread acceptance from all relevant stakeholder groups nor implementation at scale. Additionally, there appears to be a persistent issue with communication towards different stakeholder groups, which can hinder implementable, actionable outcomes.
Towards a comprehensive ecosystem of evaluation
Taking into account differences between school systems, and culturally different approaches to both evaluation and regulation, a single, standalone, comprehensive framework may not be what specific markets, or specific goals, actually require. Instead, a comprehensive evaluation framework could be one composed of subcomponents that answer the relevant needs and output requirements, and that communicate with each other. This point matters because, currently, the different types of existing certification and review practices rarely interact with each other sufficiently. It will be important to increase knowledge-exchange opportunities between framework types, as well as the visibility of their work.
Looking ahead, it will be necessary to create more systematic connections between these various types of evaluation frameworks in order to develop a comprehensive ecosystem of evaluation, and to ensure that all stakeholder needs are met.
Upcoming actions and ways to be involved
Through ongoing interviews and workshops, the EEA aims to support existing and developing frameworks in their knowledge and experience exchange, with the goal of increasing evidence-informed decision making. See more about the EEA's future role in this work here.
Additionally, join the EdTech Strategy Lab and the Digital Transformation Collaborative at UNESCO headquarters in Paris on December 4, 2024, to understand key findings from the past year of active research and stakeholder engagement. Come together with over 100 participants for this one-day session where we look into evidence-informed evaluation of EdTech and its integration into the actions of all stakeholders, from development to policy implementation and classroom practice. Register your interest here!