Reclaiming Pedagogical Agency in the Age of AI Realism
The recent conversation between Helen Beetham and Audrey Watters, featured on the Imperfect Offerings podcast, offers a timely and incisive critique of “AI realism” in education. Their dialogue resonates strongly with the concerns I have raised in my academic work and blog posts on critical pedagogy, authentic assessment, and student-centred learning. As educational technologists and practitioners, we must resist narratives of inevitability that position generative AI as both ubiquitous and unchallengeable. Instead, we should foreground pedagogy, ethics, and relationality in shaping digital futures.
AI as Ideology, Not Just Technology
Beetham and Watters remind us that AI realism is not merely about accepting a new toolset; it is an ideological posture that emerges from and sustains deeply embedded structural inequalities. Their framing of AI as “the water in which we are swimming” parallels my own critique of how educational technologies often become naturalised and invisible, even as they constrain authentic, dialogic forms of learning. The integration of AI into assessment, content creation, and student monitoring is not neutral. It represents a shift towards standardisation, surveillance, and performativity—logics that stand in direct opposition to critical and student-centred pedagogies.
The Erosion of Authentic Assessment
My work has consistently advocated for assessment practices that are situated, reflective, and co-constructed with learners. Watters’ concerns about Turnitin’s AI-infused writing environments—tools that capture the entire composition process for evaluative purposes—highlight the increasing colonisation of student thought by data-driven systems. This reflects a move away from formative, exploratory learning towards a regime of optimisation and normativity. In such a context, assessment risks becoming a performance of compliance rather than an opportunity for meaningful intellectual development.
Critical Consciousness and Coerced Adoption
A key theme of their conversation is the dual consciousness experienced by educators and students alike. On the surface, there is an institutional imperative to adopt AI; beneath that, a simmering disquiet about its implications. This mirrors my own observations of the tensions faced by academics who feel pressured to conform to edtech trends while privately questioning their pedagogical value. AI realism, as Beetham and Watters articulate, becomes a coping mechanism—a way to endure systems that are already in motion rather than challenge their direction. Yet it is precisely in these moments that critical consciousness must be fostered.
Resisting the Inevitable
What Beetham and Watters articulate so powerfully—and what I have argued in my own work—is that resistance is not futile, but necessary. They call for spaces in higher education that protect epistemic agency, mental wellbeing, and the right to be unseen by extractive technologies. Their vision of rewilding and rematerialising learning environments aligns with my belief that education must make room for ambiguity, process, and co-authorship. These are not nostalgic retreats to pre-digital forms, but urgent pedagogical interventions in an era of accelerating automation and enclosure.
Conclusion: Reclaiming Our Futures
Beetham and Watters’ discussion challenges the myth of technological determinism. AI is not destiny; it is a product of choices—economic, political, and institutional. As educators and designers of digital learning, we have a responsibility to foreground human agency, ethical deliberation, and educational justice in every decision we make about technology. If AI realism is the dominant story, then we must become better storytellers—offering counter-narratives that affirm education as a deeply human, relational, and transformative endeavour.