9 March 2026

Mary Burns: “«Prosper, Prepare, Protect». We need a new compass for Tech Education in the AI Era”


Mary Burns. / Photo courtesy of the author

 

License Creative Commons

 

Antoni Hernández-Fernández

 

It is a privilege to welcome Mary Burns, a leading global authority on teacher professional development and educational technology. In 2021, Mary authored the background paper that helped frame UNESCO’s 2023 GEM Report, providing a rigorous look at technology as a tool for learning and delivery.

Today, five years after that pivotal document, Mary remains a key voice guiding the global conversation through her authorship of Brookings Institution reports such as ‘A New Direction for Students in an AI World’. We are here to discuss how the landscape has shifted and what it means for the future of the classroom.

 

Reflecting on 2021: Five Years of Transformation

Your 2021 background paper for UNESCO was written as the world was emerging from the pandemic, focusing heavily on hardware, software, and basic digital literacy. Five years later, looking at the meteoric rise of Generative AI, how have the priorities shifted? If you were to rewrite that report today, what would be the most significant addition or change?

If I were to write that report today, the most significant addition would be the failure of distance learning to offer learning experiences of commensurate, or even marginal, quality to many (most?) students across the globe. This is a systemic failure, not just a technology failure, and I actually do write about this in a 2023 publication called “Distance Education for Teacher Training: Modes, Models and Methods.” We are still paying the price in terms of learning loss, both among students who had access to no education at all and among many learners (usually poorer ones) who had limited access to technology, or access to online learning that was simply inferior to the face-to-face variant. But the biggest change is, of course, generative AI. In fact, there was some information about generative AI in the 2021 think piece, but at that time, based on everything I was reading and hearing, GenAI was a long way off. November 2022, and the commercial release of ChatGPT, was a watershed moment in edtech.

And a lot has happened since 2022…

The focus now in 2026 is all AI, all the time, or, as I like to say, AI is everything everywhere all at once. It is different from any technology we’ve seen before in education. But the issues around AI are similar to those I wrote about for technology in general in 2021: issues such as quality, equity, evidence of impact, hyperbole, the gulf between the rhetoric from tech companies and the lack of measurable impact thus far, the herd mentality and pressure for adoption in education systems, and the implementation and research gaps.

One difference is that we may have reached peak edtech, the way I think we’ve reached peak social media. There is far more scepticism now in the education community, in some cases outright backlash, against educational technologies, and there is enormous scepticism about AI in education, and about AI companies and entrepreneurs themselves, including among educators who are power users of the tools and platforms. I believe that students should have access to technology in school. At the same time, I believe this pushback is a good thing because we need reflective discernment about the technologies we own, use and deploy with children. I hope this re-examination of technologies filters down to the unexamined use of AI in education and filters up to the governments who seem loath to impose any type of regulation or guardrails on AI use in education.

 

The “Prosper, Prepare, Protect” Paradigm

In fact, in your recent work with Brookings, you’ve introduced the pillars of Prosper, Prepare, and Protect. Can you briefly explain this paradigm?

These pillars encapsulate responsible and productive AI use. They emerged from interviews and research. Our interviewees spoke hopefully of AI helping students thrive, but also about the need to prepare students and teachers to use AI ethically and responsibly, and of the need to protect students from its harms. Cumulatively, these pillars acknowledge that responsible AI integration requires simultaneously maximizing benefits, building capacity, and minimizing harm: a balanced approach that moves beyond either the techno-optimism that technology companies and edtech advocates have been accused of, or the techno-scepticism of the latest Global Education Monitoring Report, toward a techno-realism which I hope is the tone and tenor of the Brookings report.

  • Prosper focuses on how students can leverage AI to enhance their learning, creativity, and academic growth—essentially the opportunities for flourishing.
  • Prepare addresses the competencies, literacies, and critical capacities students need to develop to navigate an AI-saturated world effectively and ethically.
  • Protect encompasses the safeguards, boundaries, and awareness necessary to mitigate risks ranging from overreliance and deskilling to privacy concerns and misinformation.

Why do you feel that ‘Protecting’ foundational learning has become so much more urgent in 2026 than it was back in 2021?

The “protect” pillar is critical in the age of AI. Technology, as I wrote in 2021, has long posed discrete harms to users (young people) in terms of identity theft, surveillance, doxxing, online safety, and exposure to inappropriate content. AI didn’t cause these harms, but it is certainly amplifying them. Further, the threats AI poses to children and teens operate at a much more foundational and existential level than non-AI harms, because AI threatens the very processes through which children grow and develop as thinking, feeling, social and trusting beings. More than any other technology, AI makes offloading, that is, foregoing developmental experiences for convenience, seductive. And here the harms are more pervasive than with other technologies.

Why are the harms more pervasive than with other technologies?

When children and teens offload their thinking to AI tools and platforms, they bypass the cognitive effort that builds knowledge, critical thinking and intellectual capacity. When they offload their emotions to AI avatars and algorithms, they bypass the emotional labour critical to managing their internal lives. When they offload social interactions to AI, they trade messy but qualitatively superior human interactions for easier, frictionless and qualitatively poorer interactions with algorithms. When they defer trust from human beings to AI, they short-circuit the relational trust so necessary for believing in others, in facts, in the competence of others, and indeed in their own judgment. This threatens both efficacy and self-efficacy, which are foundational to teaching and learning. It threatens epistemological trust in knowledge, experts, institutions and the education system itself, and I worry that it pushes them closer to cynicism and nihilism.

Oof, not an easy world…

These harms strike at the foundations of human development and societal development. Therefore, I think it’s incumbent upon governments and technology companies to protect students from these harms so that AI use in education is intentional, productive, ethical and responsible.

 

AI as a Tutor: The Productivity vs. Learning Trap

You’ve recently analysed how AI-enhanced tutoring can match human success. However, you also warn about “AI-diminished learning.” How can we ensure that AI remains a “thinking partner” for the student rather than just a “productivity tool” that bypasses the difficult—but necessary—process of learning and struggling with a concept?

This is a technical, regulatory, pedagogical and human challenge. First, technology companies can change the design of their student-facing platforms. They can de-anthropomorphize them to minimize constant engagement (except maybe for tutoring platforms) and make them less sycophantic so that they challenge students’ beliefs. They can be designed, as we argue in the Brookings report, to empower curiosity, deepen understanding, disclose information progressively, and promote responsible use: essentially, to teach, not tell.

AI that teaches, not tells, encompasses tools and platforms specifically designed for kids that enrich student learning. They use vetted, factual, curated content, not content from the open web. These platforms don’t dump information the way general-purpose AI platforms do; they scaffold, ask questions, and disclose information progressively, so that they guide students and the student, not the AI, is in charge. They limit interactions and build in guardrails to avoid conversational drift. The best example is probably Khanmigo. There are others, such as ChatGPT Edu, OpenAI’s education-focused version of its LLM.
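To make the “teach, not tell” idea concrete, here is a minimal sketch of what such guardrails might look like in code; the prompt wording, turn limit, and keyword-based drift check are illustrative assumptions, not the design of Khanmigo or any other product:

```python
# Minimal sketch of "teach, not tell" guardrails for a hypothetical tutor.
# All names, limits, and prompt wording here are illustrative assumptions.
from dataclasses import dataclass, field

TEACH_NOT_TELL_PROMPT = (
    "You are a tutor for school students. Never give the full answer "
    "immediately. Ask one guiding question at a time, reveal hints "
    "progressively, and confirm a solution only after the student has "
    "attempted it."
)

@dataclass
class TutorGuardrails:
    max_turns: int = 20  # cap session length to limit interactions
    allowed_topics: set[str] = field(
        default_factory=lambda: {"fractions", "ratios", "percentages"}
    )
    turns_used: int = 0

    def admit(self, student_message: str) -> bool:
        """Enforce the turn budget and a crude check against topic drift."""
        self.turns_used += 1
        if self.turns_used > self.max_turns:
            return False
        return any(t in student_message.lower() for t in self.allowed_topics)

guardrails = TutorGuardrails()
print(guardrails.admit("Can you help me compare these two fractions?"))  # True
```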

So, is AI going to be a “thinking partner” in these platforms?

If AI is going to be a “thinking partner,” it has to enable research-proven learning science that maps onto the brain’s cognitive architecture: prior knowledge activation, working memory management, retrieval practice, interleaving (mixing different, related topics or problem types within a single study session), elaborative interrogation, meaningful feedback loops, and worked examples of math problems (not just answers but the step-by-step process by which problems are solved). These can’t be optional; they have to be fundamental structural features of how the AI-platform-student interaction unfolds. And one of the best ways to do this is to involve teachers and students meaningfully in the design of these applications.
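As a rough illustration of what “structural, not optional” could mean, here is a sketch in which the tutor can only move through a fixed sequence of learning-science phases, so progressive disclosure is enforced by construction; the phase names follow a subset of the practices above, and everything else is assumed:

```python
# Sketch: a tutor state machine that cannot skip the student's cognitive work.
# The phases cover a subset of the practices named above; all else is assumed.
from enum import Enum, auto

class Phase(Enum):
    ACTIVATE_PRIOR_KNOWLEDGE = auto()   # "What do you already know about X?"
    RETRIEVAL_PRACTICE = auto()         # recall before re-teaching
    WORKED_EXAMPLE = auto()             # step-by-step process, not just answers
    ELABORATIVE_INTERROGATION = auto()  # "Why does that step work?"
    FEEDBACK = auto()                   # meaningful, specific feedback

SEQUENCE = list(Phase)  # enum members iterate in definition order

def next_phase(current: Phase, student_attempted: bool) -> Phase:
    """Advance only after the student has attempted the current phase."""
    if not student_attempted:
        return current  # the tutor may not skip ahead on the student's behalf
    i = SEQUENCE.index(current)
    return SEQUENCE[min(i + 1, len(SEQUENCE) - 1)]

phase = Phase.ACTIVATE_PRIOR_KNOWLEDGE
phase = next_phase(phase, student_attempted=True)
print(phase)  # Phase.RETRIEVAL_PRACTICE
```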

But designing AI platforms this way is difficult and counterintuitive for AI companies, because they have to resist their own efficiency pressures. This is why a lot of “educational” AI platforms focus on engagement and time on task and then conflate these metrics with learning. It won’t happen unless AI companies are forced by government regulation and education systems to make these products truly educational. This requires regulation, which in the US we do not have. Indeed, the U.S. Department of Justice created an AI Litigation Task Force last month to sue states whose AI laws the administration considers too burdensome: a clear federal usurpation of state powers.

Absent regulation, education jurisdictions (districts, states, provinces, countries) can create their own standards for educational AI and demand that AI companies conform to them, and they can band together, using their procurement power, to make purchase of these tools contingent upon meeting those standards. They can create age limits on AI, as is happening now with social media, which tech companies must manage.

And what about teachers and pedagogy?

Teachers need to foster a kind of systems thinking about AI. They must help students use AI tools for guided practice while ensuring that learners tackle similar problems independently to reinforce skills. They have to help students understand where AI can be helpful in augmenting learning and the risks it poses in automating learning. They have to help students interrogate rather than accept generative outputs. This means cultivating critical synthesis instead of passive consumption, ensuring that students engage AI through reflective practice that preserves the deeper understanding essential for interdisciplinary learning.

This is a pedagogical challenge…

Teachers must model productive AI use while simultaneously teaching students to discern quality, relevance, and bias, offering these critical capacities as necessary counterweights to the seductive efficiency of these tools that deliver ready-to-use information that is often misleading, incomplete, or fundamentally inaccurate. The pedagogical challenge is not to reject AI’s utility but to develop in students the intellectual discipline to use it without surrendering the cognitive work that constitutes genuine learning. All of this will look different across education systems, subjects and age groups.

People in general, not only students, have a personal relationship with technology…

Of course. We need to interrogate our own relationship with technology. Our attitudes towards technologies like AI are full of contradictions, many of which cannot be smoothly resolved. We love technology because it makes tasks so much easier for us, and it often diverts our attention from thoughts and tasks that are cognitively and emotionally challenging. Yet AI technologies represent an immense force of change that may transform the world in ways no one envisions or wants.

But how do we manage this?

I’m quoting from something I wrote here, but I firmly believe this: We need to find a better way of managing our relationship with technology that balances our priorities as human beings. We need to keep our “eyes wide open” and act with agency and equilibrium when using potentially transformational tools such as GenAI. We can embrace artificial intelligence with enthusiasm while simultaneously advocating for discernment in its application. We can appreciate its potential benefits without relinquishing our control over its decisions. We can use it as a valuable tool, support, and scaffold without becoming dependent on it or letting it become our surrogate. We can be eager and informed users of technology while prioritizing the humanity and human element of education. But to do all of this, we must remain vigilant, mindful not only of what we stand to gain from the use of generative AI but also of what we stand to lose.

 

Teacher Agency in an Automated Age

In 2021, the challenge was helping all teachers use technology. In 2026, the challenge seems to be preventing technology from replacing teacher agency. How can educators maintain their authority when algorithms can generate lesson plans and grade essays in seconds?

The decline of teacher agency in the age of AI is a real concern, even if the research base on this specific question remains underdeveloped. In early 2024, I authored a book chapter for the University of Lisbon titled “Eyes Wide Open: What We Lose from Generative AI,” in which I argued that adults (university faculty, teachers, and myself included) risk outsourcing our cognitive labour and professional agency to Large Language Models in exchange for time savings. This is not a problem unique to students: as humans, we are hardwired for shortcuts, the path of least resistance, and cognitive offloading. This vulnerability was confirmed by my conversations with teachers themselves for the Brookings report. Many of them openly admitted struggling with their own over-reliance on AI, a pattern also documented in recent reports on AI use in education.

But teachers face increasing social pressures, and have no time…

A significant predictor of student cognitive offloading is time pressure, and there is every reason to believe the same holds true for teachers. Despite the widespread misperception that teachers work fewer hours than people in other professions, studies from both the United States and the United Kingdom show that teachers work substantially more hours on average than non-teachers. The majority of teachers globally are women, many of whom also carry disproportionate child-rearing and domestic responsibilities. This convergence of professional and personal pressure creates precisely the conditions under which cognitive and professional offloading becomes not just tempting but rational.

And then AI is an easy shortcut…

The consequences of outsourcing the cognitive work of teaching to generative AI, however, are significant. Instructional planning, reflective practice, and personalized preparation are not incidental to teaching — they are constitutive of professional identity and expertise. When teachers relinquish this intellectual labour, they risk what can only be described as a form of alienation in the Marxian sense: a fragmentation of the relationship between the worker, their work, the beneficiaries and themselves. Specifically, outsourcing instructional planning to an AI platform risks alienating teachers from the product of that planning, from the personal investment and iterative thinking embedded in the process of planning, from the collegial dimensions of co-planning with grade-level or departmental peers, from colleagues who still design what we might call “artisanal” lesson plans without AI assistance, from the individual needs of their students, and ultimately from their own professional potential. As dependence on AI tools deepens, teachers increasingly relinquish control of the very tools meant to serve them, becoming further disconnected from the elements that constitute meaningful agency.

So should we stop using AI in teaching?

To be clear, I’m not arguing against AI use in teaching. AI deployed as a genuine co-creator — one that extends teacher thinking rather than replacing it — holds real promise. AI use that is measured and whose outputs are interrogated, revised, and repaired can be helpful and insightful—almost like having a (digital) teaching assistant. The concern is wholesale substitution: the quiet, incremental transfer of core professional responsibilities to AI systems in ways that go unexamined and unchallenged.

What is the role of educational institutions?

Educational institutions play a critical role in terms of teacher agency. They should affirm clearly and consistently that AI tools are designed to amplify and support what teachers do, not automate it. They should invest in sustained professional development in both AI literacy and instructional design, and protect teacher time through intentional scheduling and workload policies. AI in education should not be something done to teachers, but something done with them. They must have a substantive voice in the design of educational AI platforms and in all procurement and governance decisions around AI use in schools, not as tokenism or a formality, as is often the case. Most fundamentally, as a society we need to renew our commitment to professionalizing and validating teaching as a worthy intellectual pursuit, one whose identity is grounded in judgment, relational knowledge, and ethical care.

Ethical responsibility is crucial…

Teachers have a responsibility here: to reflect honestly on the degree to which they are using AI, and to be transparent about it with their colleagues and students. This kind of reflection is most sustainable when it happens within a community of practice, where teachers can support one another in navigating the genuine tensions between efficiency and professional integrity — and in doing so help one another maintain professional and personal agency.

 

Beyond the “Noise”: Identifying the Real Value of Technology

You have often spoken about the “noise” in EdTech. Today, the noise around AI is deafening. Based on your research, what is one “signal”—one specific application of Generative AI—that you believe offers a genuine, evidence-based breakthrough for students in under-resourced settings?

When it comes to generative AI in education broadly, we are still confusing noise for signal, and mistaking harm for help. But perhaps the strongest signal to date is being emitted by AI tutoring. Traditional Intelligent Tutoring Systems (ITS) long predate generative AI and have drawn on symbolic AI, predictive analytics, and, later, machine learning. These systems are supported by a substantial and methodologically rigorous evidence base demonstrating positive effects on student learning, including for struggling learners and in low-resource contexts such as India, rural China, El Salvador and Sub-Saharan Africa, where they have expanded equitable access to educational support that was previously available only to those with greater resources.

So, can these AI-based support systems be effective in these contexts?

I write about these both in the GEM think piece and in my 2023 publication Distance Education for Teacher Training: Modes, Models and Methods. The effectiveness of these tutoring systems derives from their design, which incorporates effective tutoring principles, rather than from the technology per se. The problem is that much of the current research and commentary on “AI for tutoring” conflates generative AI with these earlier systems, thereby misattributing a well-established body of evidence to generative AI specifically. These are not the same technologies, and the distinction matters enormously for how we interpret claims about what works.

But is there any scientific evidence to support this?

Having said that, there is a growing body of high-quality research, from institutions including the World Bank, Stanford, Harvard, and Google, suggesting that when established tutoring programs integrate generative AI, or when tutoring systems are built on top of large language models, they can yield meaningful learning benefits, including, in some cases, for more disadvantaged learners. Right now, most reported gains are short-term, task-specific, and context-dependent rather than longitudinal or system-wide, so we have to be careful about causal attribution and generalizability.

This is undoubtedly a complex problem

But either as part of an ITS or as a distinct platform, generative AI can bring affordances that represent a meaningful departure from what earlier rule-based systems could offer — and it is here that the conflation problem I raised above begins to matter in a productive rather than cautionary sense. GenAI’s capacity to produce natural language — and even some of the characteristics I have critiqued elsewhere, particularly its tendency toward anthropomorphic interaction — can be educationally productive when deliberately designed.

Any specific examples?

I’ll use my own. In preparation for work with a group of teachers in Italy, I designed a historical chatbot that doesn’t supply answers to students but rather uses famous Italian figures to prompt students to share what they know, and then follows up with clarifying questions. The tone is friendly and conversational, but professional.

This conversational and anthropomorphic design can provide some degree of Socratic dialogue and deliver personalized feedback that feels responsive to learners. Like my chatbot, LLMs can generate naturalistic dialogue, such as explanations tailored to individual student questions, rather than selecting from the pre-scripted responses typical of rule-based tutoring systems. Students can ask follow-up questions in natural language and receive contextually appropriate answers, while AI systems can provide sophisticated feedback on open-ended responses, particularly in domains like writing or mathematical problem-solving. This addresses one of the historical limitations of rule-based tutoring systems, which struggle with student queries that fall outside their predetermined pathways. Finally, generative AI tutoring systems create a psychologically safe space for students to learn. AI platforms don’t get impatient, they are not judgmental, and students can ask questions without shame or embarrassment. The individualized nature of tutoring confers a degree of privacy from peers, while the asynchronous nature of AI tutoring allows students to engage with material at times when they feel most ready to learn.
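For readers who want a feel for how such a question-first persona can be set up, here is a minimal sketch; the persona choice, the prompt wording, and the stubbed generate() call are assumptions for illustration, not Burns’s actual implementation:

```python
# Sketch of a question-first historical persona in the spirit described above.
# The persona, prompt wording, and stubbed model call are assumptions; any
# LLM provider's chat API could sit behind generate().
def build_persona_prompt(figure: str) -> str:
    return (
        f"You are {figure}. Do not lecture and do not give answers. "
        "Greet the student warmly, ask what they already know about your "
        "life and times, and reply only with friendly, professional "
        "clarifying follow-up questions."
    )

def generate(system_prompt: str, student_message: str) -> str:
    # Stand-in for a real model call; a production version would send
    # system_prompt and student_message to a chat-completions endpoint.
    return "Interesting! What makes you say that, and where did you read it?"

if __name__ == "__main__":
    prompt = build_persona_prompt("Galileo Galilei")
    print(generate(prompt, "I think you invented the telescope."))
```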

Research conducted in specific instructional contexts, particularly studies focused on writing feedback, has found that students receiving AI-generated feedback, especially L2 learners (those who speak a language other than the language of instruction), can perform as well on measured outcomes as those receiving traditional teacher feedback. These findings suggest rough equivalence, not superiority, under controlled conditions, and they should not be overgeneralized. The more compelling implication is not that one form of feedback should replace the other, but that, in terms of tutoring, students may benefit from strategic hybrid feedback that combines human-designed pedagogical scaffolds with AI-generated adaptive learning and feedback.

 

The Equity Paradox of 2026

After the coronavirus pandemic, we talked about the “digital divide” in terms of internet access. Today, we face an “AI divide”.

Technology has always promised inclusion for the world’s poorest students but has typically delivered exclusion. Unfortunately, research from large-scale ed-tech deployments in contexts like Sub-Saharan Africa demonstrates that technology alone rarely closes learning gaps—and frequently widens them, so I am not hopeful that the AI divide will be narrowed anytime soon. My pessimism derives from the reality that digital and educational inclusion cannot happen without the most fundamental supports: access to a device, internet, software, and a caring, well-trained teacher. All of this must be situated within an education system that is responsive to the needs of the communities most often excluded from technology provision. These are not technical prerequisites — they are political ones.

How do we prevent a future where students in wealthy nations get human-led education supported by AI (as you said at the 2026 IFE Conference, paid generative AI models are the more reliable ones), while students in the Global South receive “automated-only” schooling or rely on unreliable generative AIs?

Designing for equity and quality is above all a political choice. It involves publicly and meaningfully prioritizing equity and quality within an education system; deliberately directing resources and investments to marginalized communities; developing initiatives tailored to local needs rather than imposing one-size-fits-all solutions; creating standards, helping educators attain them, and holding systems accountable for doing so; and consulting and collaborating with communities rather than imposing external diktats. This is all compounded by the excessive costs of AI processors and reliable, accurate models. I don’t entirely blame AI companies — their development costs are so high that they focus their products on countries and systems that can pay for them — but the consequence falls hardest on those already left behind.

Is this the responsibility of national governments?

Closing the AI divide requires countries to adopt the stance that equity extends to all human beings. Inclusion cannot be conditional on capacity that has never been built, so rather than applying readiness criteria that effectively exclude marginalized communities from AI transformation, a focus on equity would mean supporting under-resourced education systems, schools, and communities through targeted policies and strategic resource reallocation. There are initiatives attempting to address this, such as the United Nations Development Programme’s Hamburg Declaration on Responsible AI for the Sustainable Development Goals.

But you’re pessimistic…

Yet many countries simply lack not just the will but the know-how and resources to do any of this, hence my pessimism. Many low-resource governments and education systems may be able to diversify AI and education funding through innovative approaches that create sustainable infrastructure while building local capacity and ownership (for example, Innovative Financing for Education (IFE) mechanisms that don’t simply wait for national budgets to expand but actively construct new funding pathways that keep equity and community ownership at the centre). But many more will not. And the poorest parts of the globe may be able to make any of the above solutions possible only if they can attract donor interest.

And what about AI teachers in these contexts?

To address your specific focus on AI teachers, technology has a long history of providing access to learning for communities lacking quality in-person teaching. Interactive Audio Instruction, live radio tutoring, and television classes like Mexico’s Telesecundaria program have scaled learning opportunities to communities with no teachers; delivered national curriculum to learners in remote areas; and supported community volunteer teachers who have little or no training. A qualified, skilled human teacher is preferable to a technological one, but an AI teacher is better than nothing. In the poorest communities and schools across the globe that lack external financial support, particularly given the massive global teacher shortage, the real tragedy is that students may well have access to nothing.

Finally, Mary, it was a pleasure to listen to you in Monterrey, and now to have been able to delve a little deeper. Thank you from the readers of Educational Evidence.

Thank you! This has been a real honour.

Mary Burns reminds us that while technology has changed more in the last five years than in the previous twenty, the core of education remains a human endeavour. Her call to ‘protect’ the student’s cognitive development in an automated world is perhaps the most important lesson for teachers.


References:

Burns, M. (2021). Background paper prepared for the 2023 Global Education Monitoring Report, Technology and education: Technology in education. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000378951_spa

Burns, M. (2023). Distance Education for Teacher Training: Modes, Models, and Methods. Washington: EDC. http://go.edc.org/07xd

Burns, M. (2024). Eyes Wide Open: What We Lose from Generative Artificial Intelligence in Education. In:

Burns, M. (2026). What the research shows about generative AI in tutoring. Brookings Institution. https://www.brookings.edu/articles/what-the-research-shows-about-generative-ai-in-tutoring/

Burns, M. et al. (2026). A new direction for students in an AI world: Prosper, prepare, protect. Brookings Institution. https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/


Source: educational EVIDENCE

Rights: Creative Commons
