

Beyond the Redesign Rhetoric: Labour, Power, and the Hidden Costs of AI-Ready Assessment

[Illustration: a woman at a desk writing, surrounded by surreal machines, a robotic hand, and a classical building, symbolising the tension between education, AI, and institutional control.]

Introduction: Redesign Fatigue

“Redesign your assessments.” This phrase has become the default mantra of the post-ChatGPT educational landscape. Conference keynotes echo it, blog posts repeat it, and guidance documents enshrine it. As the repetition grows, so too does the fatigue. For many academics, this is not a new call but a recycled one, and it continues to miss a crucial point.

The problem is not simply that assessments need redesigning. The deeper issue is that the capacity to redesign (to reimagine, test, and embed pedagogically rich alternatives) is unevenly distributed, chronically under-resourced, and politically constrained. This is not only a design challenge. It is an institutional one.

Redesigning in the Void: Who Actually Gets to Do the Work?

Beneath the surface of every successful assessment redesign lies an invisible web of labour. Academics carve out hours from already-overloaded workloads. Learning technologists and educational developers translate aspirations into practice. Sessional staff implement unfamiliar models without adequate support.

Yet the calls to “redesign” often ignore this labour, or worse, imply that redesign is simply a matter of personal will or moral commitment. This obscures the material conditions of academic work. Many university staff are burnt out, demoralised, and working in environments shaped more by audit cultures than by educational purpose (Blackmore, 2002).

In Russell Group contexts, the disconnect can be especially stark. Research priorities dominate. Teaching is often undervalued, and meaningful pedagogical innovation is frequently treated as extracurricular (Brew & Boud, 1995).

The Managerial Turn in Assessment Discourse

Generative AI has been swiftly absorbed into existing managerial logics. “AI-readiness” is framed as a matter of risk management, compliance, and reputational protection. Detection software is procured. Assessment redesign is reduced to an implementation timeline.

In this rush, something vital is lost. Assessment is no longer treated as a pedagogical practice but as a logistical one. The prevailing questions concern scalability, trackability, and verification, rather than enrichment, provocation, or transformation. Educational values are subordinated to operational concerns (Williamson et al., 2020).

This results in a narrow vision of assessment reform, where the dominant actors are platforms and policies, not teachers and students. Design is conflated with delivery. Pedagogy is conflated with risk.

Reclaiming Assessment as a Site of Professional Agency

This direction is not inevitable. Across the higher education sector, there are examples of academics resisting the drift toward technocratic assessment. Some have embraced ungrading or portfolio-based assessment (Blum, 2020). Others have experimented with dialogic annotation, student-led projects, or public-facing scholarship. These approaches foreground meaning-making, relationality, and critical engagement (hooks, 1994).

However, these examples remain exceptions, and they are structurally fragile. They require time, trust, and institutional support. Most importantly, they require that educators are not merely implementers of redesign but co-authors of it.

Assessment, in this context, must be reclaimed as a site of professional judgment and pedagogical agency. It is not simply a problem to be solved but a space to think with students about what knowledge matters and how it is produced.

Towards Infrastructures of Care and Imagination

If we are serious about assessment reform in the age of AI, then we must abandon the belief that design alone is sufficient. Instead, we must construct infrastructures that sustain educators as intellectual, creative, and caring professionals.

This could include:

  • Creating workload models that protect time for collaborative redesign.
  • Funding internal pedagogical fellowships or inquiry groups.
  • Offering alternatives to centralised platform solutions.
  • Including students as co-designers of assessment.
  • Replacing compliance metrics with support for experimentation and reflexivity.

These are not minor adjustments. They represent political commitments. They demand resistance to the edtech industry’s promise of frictionless fixes and a willingness to ask harder, slower questions about the purposes of assessment.

Conclusion: The Real AI Challenge

The real challenge presented by artificial intelligence is not technological. It is ideological. It forces us to confront what assessment has become: a mechanism of control, verification, and efficiency.

At the same time, it offers an opportunity to reimagine what assessment could be: a practice rooted in dialogue, care, and shared meaning. Such redesign does not begin with rubrics or tools. It begins with institutions that are willing to redistribute time, attention, and trust.

Until such conditions are met, the call to “redesign assessment” will remain what it so often is: a slogan shouted into the void.

References