- 30 January 2026
On the Framework for Teachers’ Digital Competence in AI

Detail from the cover of the document Teachers’ Digital Competence in Artificial Intelligence. / Government of Catalonia. Department of Education and Vocational Training.

Artificial intelligence (AI) entered schools without passing through the head teacher’s office. It did not ask permission from staff meetings, nor did it wait for a decree. It is simply there—in classrooms, in students’ homes, and on their digital devices. Faced with this reality, and within an ecosystem where frameworks and recommendations proliferate to the point that information overload becomes disinformation, the Departament d’Educació i Formació Professional of the Generalitat de Catalunya has made a clear effort to organise, define and delimit concepts and areas of action, while also erecting a form of normative, competence-based guardrail. This is how the Competència Digital Docent en Intel·ligència Artificial framework came into being.
It is not a curriculum. It should not be an imposition. For the time being, it is not a requirement. It is a framework: a new guide and a statement of intent. That much should be acknowledged upfront. It is also, it must be said, a clear attempt to cool down techno-utopian or techno-solutionist enthusiasms before they do more harm than good. And it will very likely irritate technophobes as well.
One central idea runs through the document from beginning to end: AI is not neutral. It is not (merely) an innocent machine, nor just another digital tool (Laba, 2025). It is a general-purpose technology with the capacity to shape decisions, educational processes and human relationships. What the document perhaps cannot do—because it is an official text—is openly challenge the powerful economic interests that lie behind the AI spectacle. Hence the need to train teachers. Be careful not to become the product yourselves. But this training needs to be genuine, not merely rhetorical or based on vague generalities.
What the framework actually proposes
The proposal is structured around four broad dimensions: understanding and safe use of AI; ethics and a humanistic perspective; pedagogy with AI; and professional development. These are organised across three progressive levels—basic, intermediate and advanced. A scale designed so that every teacher can find their place without feeling pushed out of the system? After all, primary education is not the same as secondary education, nor is it the same as vocational education and training.
The references are explicit, clear and recognisable: UNESCO, European competence frameworks, DigCompEdu. The document does not invent anything new, but it does synthesise—a considerable task, given the volume of material—and adapts extensively. At times it reads as a careful translation of international frameworks into the Catalan context, with an institutional and normative emphasis. The prevailing tone is one of caution.
Strengths of the model
In my view, the first major strength is conceptual: separating the analysis of AI from general teachers’ digital competence. Not everything boils down to “knowing how to use tools”. AI introduces risks, opacities and ethical dilemmas on a scale that did not previously exist. Treating it as a mere artefact or a minor extension would have been a serious mistake. That said, the document will need to be more clearly integrated into the broader teachers’ digital competence framework. Every good analysis ultimately requires synthesis.
The second strength is its almost militant focus on safety—as it should be. Data protection, algorithmic bias, hallucinations, transparency, accountability. The document insists that ultimate responsibility always lies with a human being—within educational institutions, at least. In an age of uncritical automation, this is a clear stance. That said, anthropomorphising language often creeps in (we should remember that AI does not “think” or “hallucinate”), and this needs monitoring. To address this, the document includes a final glossary—evidence of a genuine linguistic and didactic effort.
The third strength is its positioning of ethics at the very core of the discourse—not as a decorative appendix, but as a structural dimension. Human agency, critical thinking, teacher supervision, limits to delegation and cognitive offloading. The message here is unequivocal: AI may assist us, but it cannot think for us.
The fourth strength is pedagogical. Before talking about tools, we must talk about learning objectives. Before discussing automation, we must establish didactic and professional criteria. AI does not rescue poor teaching practices; it makes them more visible. It is not a miracle, nor will it solve everything (Lara and Magro, 2025).
Finally, the document has the virtue of not presenting itself as definitive. It declares itself open, revisable and updatable—something of a rarity in official documents, and a welcome gesture of intellectual honesty.
The inevitable shadows
Not everything, however, is a strength. The first problem is density. The document is somewhat long, conceptually demanding for the uninitiated, and packed with terminology: glossaries, indicators, overlapping competence frameworks. In many schools there will simply not be the time or energy to digest it. It may feel overwhelming. Some clear visual schemata help—perhaps even a Pokémon-style evolutionary graphic, reminiscent of the famous “little candy” from the teacher’s digital competence framework.
The second problem is the lack of obligation. Without accreditation, clear incentives or a direct link to a defined professional career path, the risk is obvious: that only those already convinced will read it—or those forced to do so by circumstance.
The third problem emerges, in my view, at the advanced level. Mentoring, institutional leadership, tool creation, protocol design—all perfectly reasonable in theory, but deeply unrealistic without dedicated time, resources and explicit recognition. The framework presupposes a teacher with structural leeway, real opportunities for progression (will advanced levels entail reduced teaching loads?) and sufficient personal time—conditions that often simply do not exist. There is a utopian note here.
The fourth problem is an excessive reliance on individual training. Without a robust systemic architecture, responsibility once again falls on the same figure: the motivated teacher. And that is exhausting. People get tired; the principle of least effort is a very human behavioural tendency, as George Kingsley Zipf observed (1949).
There should, moreover, be a right not to use AI—a form of conscientious objection, if only because reducing its use, like that of other technologies, is beneficial for the planet. Each professional should decide within their own specialism. Technology subjects and vocational training are not the same as other disciplines or educational stages.
A final message
Linked to the above, it is true that the document does not say “use AI”, but rather “know what you are doing when you use it”. It promises no miracles. It warns of risks. It does not sell the future; instead, it attempts to regulate a highly diverse present. Perhaps this is its greatest value. In a context of technological euphoria, this framework does not clap enthusiastically. It observes. It does not accelerate. It slows things down slightly. And in education, this is not conservatism—it is responsibility.
Finally, there is a paradox. We speak at length about pedagogy with AI within an education system that still lacks, in any generalised sense, robust, auditable, home-grown educational AI infrastructures. Will we be able to work, for example, with our own local, free and open-access language models? Will we be able to free ourselves from the large corporations that are beginning to engulf the education system? This requires vision—long-term perspective. For now, there is very little operational reality pointing towards independence from multinationals.
Teachers, you have reading material for reflection. Perhaps for Easter, when we may have more time. As a form of penance for dataism.
References:
Generalitat de Catalunya (2025). Competència digital docent en intel·ligència artificial. https://educacio.gencat.cat/web/.content/home/departament/publicacions/monografies/mon-digital/competencia-digital-docent-intelligencia-artificial/competencia-docent-ia.pdf
Laba, N. (2025). AI is not a tool. AI & Society. https://doi.org/10.1007/s00146-025-02784-y
Lara, T. and Magro, C. (2025). IA y educación. Una relación con costuras. Madrid: Trama Editorial. Biblioteca Digital Journey.
Zipf, G.K. (1949). Human Behavior And The Principle Of Least Effort. Cambridge, MA: Addison-Wesley. https://archive.org/details/in.ernet.dli.2015.90211
Source: educational EVIDENCE
Rights: Creative Commons