When AI Breaks the Exam: Reflections from a Programme Review in Online Education

Introduction

This post reflects on a recent review of a fully online, distance-taught postgraduate programme. The review was prompted by a growing concern shared by the programme team: how sustainable are our existing assessments, particularly online exams, in a context where Generative AI tools are now widely available to students?

The concern itself is not unusual. Across higher education, AI has sharpened anxieties around academic integrity, authorship, and assessment security. What this review revealed, however, was that many of the risks now attributed to AI were already present in the programme's existing assessments. AI did not create the assessment problem. It exposed it.

The Programme Context

The programme under review is designed for professionals studying part-time and at a distance. Students are geographically dispersed, often balancing study with work and caring responsibilities. The online format is therefore not a convenience but a core feature of the programme’s educational and widening participation mission.

Within this context, proposed responses to the perceived vulnerability of online exams to AI included remote proctoring and, more significantly, a return to in-person invigilated exams. Both options would represent a fundamental shift in the nature of the programme, introducing new barriers related to cost, accessibility, travel, and student experience.

These proposals were understandable reactions to uncertainty, but they also raised a more fundamental question: are we trying to preserve a particular assessment format, or are we trying to preserve academic standards?

AI and the Turn to Control

Early discussions during the review framed Generative AI primarily as a threat to be managed. This framing tended to lead quickly to conversations about detection, surveillance, and enforcement.

Such approaches position academic integrity as a technical problem and assessment as a site of control. They also risk undermining trust between students and institutions, particularly in online and distance education, where physical supervision has never been the norm.

Remote proctoring, in particular, does not redesign assessment. It attempts to stabilise existing practices through increased observation, often at the expense of accessibility, inclusivity, and student confidence.

Exams Were Already Under Strain

A key insight from the review was that the programme’s reliance on time-limited, unseen exams pre-dated concerns about AI. These assessments primarily tested recall and procedural knowledge under pressure, rather than sustained understanding or professional judgement.

Such formats have long been questioned in relation to authenticity, validity, and alignment with real-world practice. When an assessment can be convincingly completed by a Generative AI tool, the issue is not simply misuse. It is whether the assessment was meaningfully aligned with the programme’s learning outcomes in the first place.

AI makes these misalignments more visible, but it does not cause them.

Reframing Academic Integrity

One of the most productive shifts during the review was reframing academic integrity as a question of assessment design rather than student behaviour.

Academic integrity is more likely to emerge when students are asked to engage in situated, applied work that requires interpretation, judgement, and contextual awareness. Work of this kind is difficult to outsource convincingly, regardless of the tools available.

This reframing places responsibility back with programme teams and institutions. Rather than asking how students can be prevented from using AI, the more constructive question becomes: what kinds of learning do we genuinely want to assess?

An Alternative Assessment Pattern

As part of the review, an alternative assessment was developed as an exemplar. It retained the existing learning outcomes and credit weighting but replaced the exam with a staged, applied task grounded in a realistic professional scenario.

Students were required to analyse a situation, justify decisions, and demonstrate how theoretical concepts informed their reasoning. The assessment was structured and supported, but the intellectual work was visible and traceable.

Importantly, this was not designed to be “AI-proof”. It was designed to be educationally coherent in a context where AI exists.

Standards, Alignment, and Confidence

A common concern when moving away from exams is that academic standards may be diluted. During the review, this concern was addressed through explicit mapping between learning outcomes, assessment tasks, and marking criteria.

In practice, the alternative assessment made standards more transparent rather than less demanding. Expectations were clearer, reasoning was foregrounded, and judgement was assessed directly.

From an external examining perspective, such assessments are often easier to defend than high-stakes exams whose validity and reliability are increasingly contested.

Assessment and Academic Presence

The review also highlighted the relationship between assessment design and academic presence. No assessment can compensate for disengaged teaching, but authentic assessments can support more meaningful forms of staff–student interaction.

They create opportunities for dialogue, feedback, and engagement with ideas over time, rather than focusing solely on performance in a constrained window. For online and distance programmes, this alignment between assessment and presence is particularly important.

Implications for Programme Teams

The central lesson from this programme review is not that AI requires emergency measures. It is that AI accelerates the consequences of unresolved pedagogical decisions.

Assessment redesign is a programme-level responsibility. It requires time, leadership, and collective ownership. Technical fixes and surveillance tools cannot substitute for thoughtful design.

Conclusion

Generative AI challenges higher education to be more explicit about what it values. The choice is not between exams and cheating, but between control and design, surveillance and trust.

For online and distance programmes in particular, the most sustainable response lies not in replicating campus-based controls, but in rethinking assessment in ways that align with educational purpose, professional practice, and student reality.

This programme review did not produce a single solution. It produced a clearer direction of travel, one that places pedagogy, rather than policing, at the centre of assessment in the age of AI.