Designing AI-Resilient Assessments in Online and Distance Education

As AI tools like ChatGPT become more sophisticated and accessible, educators - especially those working in online and distance learning - are facing a rapidly shifting assessment landscape. There’s growing unease about academic integrity: Can we still trust that students are submitting their own work? Can our current assessment practices withstand the ease and plausibility of AI-generated content?
These are valid concerns, but the moment calls for reflection rather than panic. In my own work developing an Online Teaching Course for higher education professionals, I’ve been grappling with this very challenge. Rather than wall learning off from AI, we can take a more generative approach: designing assessments that both leverage and withstand AI’s influence. In this post, I explore what AI-resilient assessments might look like in an online and distance learning context - and why we urgently need to be thinking about them.
How AI Disrupts Assessment Integrity
In a recent exchange, a colleague and I were discussing an undergraduate module where several assignments seemed suspiciously polished. “At this point,” they said, “we’re just testing how well students can write prompts into ChatGPT.” That comment stuck with me - not because it was wrong, but because it surfaced a truth that many of us are still reluctant to confront: if a student can ace an assessment by copying and pasting into an AI tool, perhaps the assessment is no longer fit for purpose.
This challenge is especially acute in online and distance education, where there is often a heavier reliance on text-based submissions and far less opportunity for real-time dialogue. Asynchronous learning has many benefits, but it also opens the door wider to generative shortcuts. In this context, designing assessments that meaningfully engage learners - not just evaluate output - becomes a critical pedagogical imperative.
What Does “AI-Resilient” Actually Mean?
When I use the term AI-resilient, I don’t mean assessments that are AI-proof. There’s no such thing. Rather, I mean assessments that are robust against inappropriate AI use because they ask something deeper of the learner - something that current AI, for all its fluency, can’t convincingly fake.
In my Online Teaching Course prototype, I’ve begun introducing exercises where learners must analyse an AI-generated lesson plan, critique its assumptions, and suggest improvements. The twist? They then have to reflect on whether they themselves might have made similar errors. It’s not about whether the AI “got it right,” but whether the learner can engage critically with what it produced.
That’s the core of AI resilience: assessments that foreground thinking over product, process over polish, and learner voice over generic prose.
Practical Strategies from the Online Teaching Course
In developing this course, I’ve drawn on a number of strategies that make AI misuse less attractive - and authentic learning more central:
- Authentic Tasks: One module invites learners to redesign a learning activity based on real challenges in their own institution. The deliverable isn’t just a document; it’s a narrated slide deck explaining their thinking.
- Layered Reflection: Students submit an initial design idea, then record a short “think-aloud” reflection as they revise it based on peer feedback. These moments of self-explanation are not just learning checkpoints - they’re also difficult for AI to replicate convincingly.
- Staged Submissions: Tasks are broken into phases - plan, draft, feedback, final version with self-commentary. The progression matters as much as the final result, and encourages deep engagement.
- Asynchronous Vivas: Using Canvas Studio, learners post short videos responding to prompts like “What part of your thinking changed during this task?” or “Where did you encounter difficulty?” This blends convenience with accountability.
From Threat to Tool: Rethinking AI as a Partner
During our recent team meeting, one colleague made a compelling point: “If students are going to use AI anyway, maybe the better path is to help them use it well.” That shift - from adversarial to pedagogical - resonated strongly.
In the Online Teaching Course, I’ve designed a task where students use an AI tool to generate a brief summary of an educational article, then critique the summary’s omissions, biases, and strengths. The AI output becomes a starting point, not an endpoint. This teaches not only digital literacy but also the crucial ability to detect superficiality - a skill that matters now more than ever.
Ethical use of AI can be embedded into the assessment itself. Rather than banning it outright, we ask learners to document how they used it and reflect on its impact on their thinking. This makes space for transparency and positions AI as a tool, not a shortcut.
Supporting Educators and Institutions
Of course, none of this works in isolation. Designing AI-resilient assessments demands institutional support, time, and pedagogical space. That means:
- Updated Academic Integrity Policies: Institutions must articulate where AI use is appropriate and where it crosses a line.
- Professional Development: Academics need time and support to redesign assessments. One-off workshops won’t cut it.
- Feedback over Surveillance: Rather than pouring resources into increasingly invasive proctoring systems, let’s invest in better feedback practices and more engaging tasks.
Our recent transition to Canvas has enabled some of this - tools like Studio and integrated rubrics make it easier to implement scaffolded, multi-modal tasks. But the real shift is cultural: moving from an “enforcement” mindset to a “design” mindset.
What Educators Are Saying: A Shared Realisation
One comment from a colleague lingers: “If students can pass just by using ChatGPT, that says more about our assessment than about their ethics.” That hit home.
It’s not that writing is obsolete - far from it. But we must re-evaluate the kinds of writing we ask students to produce. A 1,500-word essay in isolation, without scaffolding or personalisation, is now deeply vulnerable to automation. But writing grounded in experience, iterative thought, and peer dialogue? That’s still very much human work.
Conclusion: Design Over Detection
The temptation in this moment is to double down on detection and restriction. But the more sustainable path is to focus on design. AI is not going away - nor should it. It will become an embedded part of our professional and educational lives.
Our role, then, is to design assessments that remain meaningful in that world. That engage the learner as a thinking, feeling, situated human being. That invite process, reflection, collaboration, and critical inquiry.
In short, assessments that AI can’t fake - because they ask the learner to be real.
Let’s keep the conversation going. What are you trying in your own teaching? How are you adapting your assessments for an AI-infused world? Leave a comment or get in touch - I’d love to hear how others are responding to this evolving challenge.