Reclaiming Assessment Through Annotation: Critical, Authentic, and AI-Resilient Pedagogy in Practice

Introduction: Why Annotation Now?
I’ve recently been reading Annotation by Remi Kalir and Antero Garcia, and it’s hard to overstate the resonance it has had with my own work in online and distance education. They present annotation not as a marginal classroom activity or technical functionality, but as a genre in its own right: a synthesis of reading, thinking, writing, and communicating. Annotation, in their hands, becomes a fundamentally social, political, and pedagogical act.
This has significant implications for how we think about assessment in higher education - especially in online and distance contexts where students can feel dislocated from both the material and each other. At a time when surveillance-based assessments and platform-driven standardisation dominate the landscape, annotation offers a compelling alternative: human, situated, dialogic.
In this post, I want to explore how annotation can be used as a form of assessment - one that aligns with three key commitments in my practice: Critical Digital Pedagogy, Authentic Assessment, and AI-Resilient Assessment.
Annotation as a Critical Digital Pedagogical Act
Critical Digital Pedagogy insists that we must centre the lived experiences of learners, interrogate the assumptions built into our tools, and reimagine pedagogy as a co-constructed and ethical practice. Annotation sits squarely within this frame. It disrupts the notion of knowledge as fixed and delivered. Instead, it encourages students to engage in ongoing dialogue - with texts, with peers, and with themselves.
When students annotate, they assert presence. They bring in context, lived experience, intertextual insight. Rather than hiding behind the false neutrality of the LMS or the standardised quiz, annotation foregrounds positionality and process. It invites marginalia back into the centre. In doing so, it opens a space for critical questioning - of content, of authority, and of the structures that shape what counts as knowledge.
For example, asking students to annotate a university policy document (on, say, inclusivity or AI use) encourages not just critical reading but also civic engagement. It makes visible the assumptions baked into institutional language. It reclaims the margins as places of intellectual and political agency.
Designing Annotation-Based Authentic Assessments
Authentic assessment prioritises real-world tasks, meaningful audiences, and the integration of knowledge with practice. Annotation supports all three. When students annotate a source, they are enacting interpretation. They are making the invisible processes of learning - curiosity, confusion, connection - visible.
Here are a few ways annotation can function as authentic assessment:
- Critical Text Annotations: Students annotate primary texts (articles, reports, legislation, literary works) with analytical or reflective commentary. Annotations can include citations to other texts, contextual framing, or critical questions.
- Collaborative Annotation Projects: Using tools like Hypothes.is, students can co-annotate a shared reading, responding to each other, building threads of discussion, and surfacing diverse perspectives.
- Multimodal Annotation: Students annotate videos, images, or audio using time-stamped comments or overlay tools. This is particularly powerful for disciplines like media studies, education, or health care where contextual analysis is key.
- Public-Facing Annotation: Annotations published openly can invite engagement from broader audiences, whether on public documents, open educational resources, or blogs. This adds a layer of accountability and relevance.
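For those working with Hypothes.is, its public search API can help tutors aggregate a class's annotations on a shared reading for review or feedback. A minimal sketch in Python, assuming the documented `https://api.hypothes.is/api/search` endpoint; the JSON field names used here (`rows`, `user`, `references`) reflect that API's response shape, and `course_url` is a hypothetical placeholder:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.hypothes.is/api/search"

def search_url(uri, limit=50):
    """Build a Hypothes.is search URL for annotations on one document."""
    return f"{API}?{urlencode({'uri': uri, 'limit': limit})}"

def summarise(rows):
    """Split annotation rows into top-level notes and replies.

    A row with a non-empty 'references' list is a reply to an
    earlier annotation; everything else is a top-level note.
    """
    notes = [r for r in rows if not r.get("references")]
    replies = [r for r in rows if r.get("references")]
    return {
        "notes": len(notes),
        "replies": len(replies),
        "annotators": sorted({r["user"] for r in rows}),
    }

# Usage (network call left commented; course_url is a placeholder):
# course_url = "https://example.org/shared-reading.html"
# data = json.load(urlopen(search_url(course_url)))
# print(summarise(data["rows"]))
```

A summary like this surfaces the dialogic texture of the activity - how many annotations were replies rather than isolated notes - without reducing the annotations themselves to a score.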
Crucially, annotation need not be confined to the margins. Students can submit reflective overviews of their annotation process - what changed in their understanding, how they responded to peers, what sources influenced their reading. These reflections themselves become valuable artefacts of learning.
Annotation as AI-Resilient Assessment
As AI-generated text becomes increasingly difficult to distinguish from human writing, educators face a growing challenge: how do we ensure our assessments reflect student thinking, not machine output?
Annotation offers one answer.
First, it resists large-scale automation. While generative AI can produce essays or summaries, it struggles with the kind of situated, incremental, and relational thinking that annotation demands. The marginal note that responds to a peer’s earlier comment, or that links a passage to a lived experience, or that draws on a specific in-class discussion - these are traces of human presence that resist replication.
Second, annotation can be framed as time-bound, collaborative, and process-oriented. Asking students to annotate a reading over a period of a week, in response to unfolding conversations, creates a context that is hard for AI to fake convincingly.
Finally, annotation foregrounds learning in progress. Where a polished essay can be ghostwritten or generated, annotations expose the messy, partial, situated work of interpretation. In doing so, they help us assess not just what students know, but how they think.
Platforms, Tools, and Pragmatics
A range of tools supports digital annotation, from open platforms like Hypothes.is to in-LMS PDF annotation features. The key is to choose tools that align with our values:

- Transparency: Students should know what data is collected, and by whom.
- Accessibility: Annotation must be inclusive of different devices, access needs, and digital literacies.
- Ethical Infrastructure: Tools should support co-creation and protect student privacy.
Annotation also requires pedagogical scaffolding. Many students are unfamiliar with meaningful annotation beyond simple highlighting. We need to teach annotation as a genre: offer exemplars, develop shared rubrics, and invite meta-reflection.
Annotation rubrics might include:
- Insightfulness of comments
- Intertextual connections
- Responsiveness to peers
- Use of evidence or context
- Clarity and tone of writing
These criteria help surface the intellectual labour of annotation and provide structure without standardisation.
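A shared rubric can also be expressed as structured data, which helps keep weightings transparent and marking consistent across a teaching team. A hedged sketch: the criteria mirror the list above, but the weights and the 0-4 scale are illustrative assumptions, not anything prescribed here:

```python
# Illustrative weighted rubric; criterion names follow the list above,
# while the weights and the 0-4 marking scale are assumptions.
RUBRIC = {
    "insight": 0.30,         # insightfulness of comments
    "intertextual": 0.20,    # intertextual connections
    "responsiveness": 0.20,  # responsiveness to peers
    "evidence": 0.20,        # use of evidence or context
    "clarity": 0.10,         # clarity and tone of writing
}

def score(marks):
    """Weighted average of per-criterion marks on a 0-4 scale."""
    missing = set(RUBRIC) - set(marks)
    if missing:
        raise ValueError(f"unmarked criteria: {sorted(missing)}")
    return round(sum(RUBRIC[c] * marks[c] for c in RUBRIC), 2)
```

For example, `score({"insight": 4, "intertextual": 3, "responsiveness": 3, "evidence": 2, "clarity": 4})` combines the five marks into a single weighted figure - structure without standardisation, since the criteria and weights remain open to negotiation with students.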
Critical Challenges and Institutional Constraints
Of course, annotation is not a silver bullet. There are challenges:
- Performativity: Some students may annotate mechanically to meet requirements.
- Tool limitations: Not all platforms integrate smoothly with institutional systems.
- Staff resistance: Annotation may be viewed as informal or insufficiently rigorous.
- Platform risk: Some annotation tools engage in extractive data practices or limit portability.
But these challenges are not unique to annotation - they’re part of the broader crisis of platformed education. They invite us to be intentional about our tools and transparent about our pedagogies.
Annotation, if thoughtfully implemented, can challenge the idea that assessment must be individual, secret, and summative. It reclaims formative feedback, dialogic process, and critical interpretation as central to assessment.
Conclusion: A Pedagogy of Presence, Not Proctoring
In an era of remote proctoring, plagiarism detection, and platform-driven assessment, annotation is a radical act. It says: learning is not about performance under surveillance. It’s about presence. It’s about engagement. It’s about the messy, joyful, contested work of making meaning together.
By embedding annotation into our assessment practices, we can design learning experiences that are critical, authentic, and resistant to the dehumanising logics of automation. We can help students not just pass, but participate. Not just answer, but annotate.
Let’s bring the margins back to the centre.
Have you used annotation as a form of assessment? What tensions or possibilities have you encountered? Let’s continue the conversation.