At a recent AI Insights event, our Head of Research, Anna Cermakova, spoke to educators and school leaders about a term now everywhere in education: AI literacy.
Her core message was simple. AI literacy is not just about learning how to use AI tools. If schools reduce it to practical guidance alone, AI use will spread faster than shared understanding, confidence and clarity about what good practice looks like.
This article captures Anna’s key arguments and the discussion that followed. It offers a practical way to define AI literacy, spot what is often missing, and understand what that means for schools.
Anna opened with a tension many schools are already living with.
Almost everyone agrees AI literacy matters. But ask ten teachers, ten policymakers and ten technologists what it means, and you may get thirty different answers. That is not because anyone is being careless. It is because the technology is evolving quickly, and different groups are emphasising different priorities.
Even the foundational questions aren’t settled, and part of the confusion is that “AI literacy” is being asked to carry multiple meanings at once: for some it is a technical subject, for others a set of practical skills, for others a matter of safety, integrity or citizenship.
That ambiguity matters. Schools cannot build something coherent if they do not share a starting point.
Anna pointed to a recent UK schools report in which half of the surveyed teachers said they use AI at least monthly, while a third said they had never used it at all. Even among those experimenting, confidence is uneven: nearly half reported not feeling confident, or described AI use as feeling uncomfortably close to “cheating”.
At the same time, the direction of travel is clear. Just over two-thirds of teachers said they expected to increase their AI use over the next 12 months.
In other words, schools are navigating a fast-moving shift in tools and expectations without a shared understanding of what responsible, educationally meaningful use looks like in practice.
That makes curriculum questions harder and more urgent. Should AI literacy sit within computing, citizenship, media literacy or safeguarding? Should it appear in pastoral provision? Or should it be woven across subjects? If schools cannot agree what AI literacy is, it becomes difficult to decide where it belongs.

At the event, Anna invited attendees to share what they thought AI literacy should include. The answers ranged from safety and bias to critical thinking, prompting and academic integrity. Rather than producing consensus, the discussion highlighted the challenge: AI literacy is being asked to hold a lot, and different people mean different things by it.
It’s not that schools lack guidance. Anna highlighted a growing body of resources, including UNESCO’s AI competency frameworks for teachers and students, and the UK Department for Education’s guidance on safe and appropriate AI use in education settings, developed with Chiltern Learning Trust and the Chartered College of Teaching.
The difficulty is that these frameworks do not always align neatly. Different organisations emphasise different priorities. The OECD, for example, often places AI literacy within broader “21st century skills”, while other guidance focuses more heavily on safety and appropriate use. Schools can end up with plenty of input, but no single shared frame.
One promising attempt to bring more clarity, Anna argued, comes from the Council of Europe. Better known for its human rights work and the European Court of Human Rights, it has also been developing education-focused thinking on AI. Its starting point is broader than tool training: what do young people need to navigate AI confidently, safely and critically as citizens, not just as users?
From that question comes a three-part definition of AI literacy: understanding how AI works, knowing how to use it appropriately, and a human dimension, grasping what AI means for people, rights and society.
For Anna, that human dimension is the one most often missing. Without it, schools risk reducing AI literacy to either technical explanation or tool training, leaving students underprepared for the ways AI shapes the information they see, the decisions made about them, and the values at stake in an AI-shaped world.
A definition on paper is not the same as literacy in practice. Anna then moved from what AI literacy should include to a deeper question: what is generative AI doing to learning itself?
She drew on work by Professor Rupert Wegerif and Dr Imogen Casebourne at the University of Cambridge, who argue that generative AI can reshape the dialogic space of learning: the space of possibility created when different perspectives are brought together, held in tension and examined.
From this perspective, AI is not a replacement for thinking. It can become another voice in the learning dialogue, offering alternative explanations, generating counterarguments and widening the range of perspectives students can work with.
But it can also narrow thinking if used uncritically. Their paper describes AI as a pharmakon: the ancient Greek term for something that can be either remedy or poison, depending on how it is used.
For Anna, the implication for schools is clear. AI can support better thinking when pedagogy is strong. Teachers remain the people who give direction, challenge assumptions, and help students evaluate claims and hold competing perspectives in tension. AI may widen the conversation, but teachers still curate it. And there are parts of teaching it cannot replace: judgment shaped by human experience, or the moment a student needs a few kind words at exactly the right time.
To make this feel less intimidating, Anna borrowed a metaphor from Rose Luckin: “if you can bake, you can understand AI.”
You do not need to understand neural networks any more than you need to understand the chemistry of baking to make a good cake. AI may offer new ingredients, but teachers still provide the recipe: the structure, timing, values and, ultimately, the flavour.
Taken together, Anna’s talk offered a route through the noise.
AI literacy cannot just mean knowing how to prompt, nor simply understanding how AI works. It also has to include what AI is doing to society, rights, trust and wellbeing. Paired with a pedagogy-first approach, that gives schools something practical: a shared language for decision-making, and a clearer sense of when AI supports learning and when it does not.
If this conversation resonated and you’d like to hear Anna’s reflections in full, including the questions discussed during the live Q&A, you can access the recording of her session here.
Updated on: 24 March 2026