Strong Evaluation in a Flat World: Resisting the Neutrality of Platforms

Introduction
Many educational technologies present themselves as neutral tools: interfaces for managing content, tracking engagement, or facilitating communication. Platforms such as Canvas, Turnitin, or learning analytics dashboards often appear as mere infrastructure, offering efficient solutions to logistical problems. Yet beneath their veneer of objectivity lies a set of normative assumptions about what learning is, how it should be measured, and what constitutes educational success.
To interrogate this, we turn to the philosopher Charles Taylor, whose work on human agency challenges the idea that values are purely subjective or optional. For Taylor, human beings are not just choosers among alternatives; they are self‑interpreting animals whose identities are shaped through qualitative distinctions about what is worth doing, being, and becoming (Taylor, 1985). He terms these distinctions strong evaluations: judgements that go beyond preference to express what we regard as higher, deeper, or more meaningful. These contrast with weak evaluations, in which choices are made purely on the basis of instrumental reasoning, efficiency, or desire satisfaction.
Education, in this light, is never a neutral or technical enterprise. It is fundamentally a moral activity, shaped by strong evaluations about what knowledge is valuable, what kinds of persons we want learners to become, and what forms of engagement are worth cultivating. Even decisions about curriculum design, assessment formats, or feedback practices rest on underlying visions of the good. To ignore these moral frameworks is to misunderstand what education is.
Yet digital platforms frequently conceal this moral dimension. Williamson (2017) observes how EdTech is framed as a domain of technical systems and data infrastructures, in which complex pedagogical questions are reduced to matters of optimisation and operational delivery. This framing both masks the value-laden choices embedded in platform design and displaces responsibility from educators to technical systems. When a learning analytics dashboard issues colour-coded alerts identifying “at-risk” students, it embodies assumptions about attendance, participation, and productivity, shaping the norms of the ideal learner rather than merely reflecting behaviours.
This critique aligns with the work of Knox (2020) in Learning, Media and Technology, who argues that educational AI systems, especially in national contexts such as China, are not neutral tools but are deeply embedded in political, economic, and epistemological imaginaries. Such tools render some forms of knowledge visible while occluding others, thus operating as moral technologies that shape educational purpose and judgement. As Bayne (2015) has shown in her work on Teacherbot, even experimental uses of automation in education carry pedagogical assumptions that structure how teaching and learning are understood.
This post argues that educators must actively resist the flattening of educational value implicit in such systems. Platforms encode decisions about what counts, what is seen, and what matters. Their defaults are not neutral; they reflect particular epistemologies and institutional priorities. To adopt them uncritically is to risk outsourcing strong evaluation to unseen technical logics. In what follows, I draw on Taylor’s concept of strong evaluation to examine how digital platforms shape educational imaginaries, and how educators can reclaim their evaluative agency. Rather than accepting platform design as given, educators must ask: what moral frameworks are enacted through our digital tools, and do they align with the educational futures we wish to imagine?
Strong vs. Weak Evaluation
Should we praise students for effort or reward them for measurable achievement? Should we design courses for compliance or cultivate autonomy and critical engagement? These questions point to the kinds of value judgements that underpin educational practice, judgements that Charles Taylor (1985) calls strong evaluations. At the core of Taylor’s account of human agency is what he describes as a moral ontology: the idea that human beings are inherently evaluative, that we live by frameworks which structure our sense of what is good, admirable, or worth striving for.
Strong evaluation, in Taylor’s terms, is not about choosing between desires or outcomes based on which is most convenient or pleasurable. It involves qualitative distinctions, assessments that some actions or goals are better than others in a moral or educational sense. For example, a teacher’s decision to foster dialogic inquiry over rote learning is not merely a matter of preference; it reflects a belief that one form of learning is more worthwhile, more conducive to growth, or more aligned with human flourishing. Strong evaluations shape our identities and commitments, not just our behaviours.
In contrast, weak evaluations weigh alternatives in terms of instrumental reasoning: what satisfies our preferences, what achieves the quickest results, or what maximises efficiency. In education, this mindset is increasingly pervasive. Performance is reduced to metrics. Feedback is automated. Progress is tracked through dashboards. Within a weak evaluative frame, teaching becomes the delivery of content, and learning is validated through quantifiable indicators rather than qualitative growth. Yet as Taylor (1989) argues, such flattening of moral depth misrepresents the nature of human life. Our ability to reflect on and revise our desires, to aspire to what we take to be higher or more meaningful, is not incidental; it is constitutive of who we are.
This distinction matters profoundly in education. Teaching is not a morally neutral task. It always involves decisions, explicit or implicit, about what knowledge matters, what kind of learner is desirable, and what values underpin the learning process. Even platform features such as default deadlines, discussion visibility settings, or grade weighting reflect particular assumptions about autonomy, collaboration, and authority. To pretend otherwise is to obscure the evaluative work that educators do every day.
Yet digital platforms often invite us to operate within a weak evaluative register. Student dashboards display rankings and colour-coded risk indicators. Learning management systems reward frequency of logins. AI tutors optimise for correctness and speed. These systems may improve efficiency, but they narrow the space for ethical judgement. By privileging what is visible, measurable, and comparable, they marginalise the moral frameworks that give educational decisions their meaning.
To counter this drift, educators must actively foreground strong evaluation in their pedagogy. We must make visible the qualitative distinctions we are already making, and resist the subtle pressures of platform design that would have us behave as if values are irrelevant. Reclaiming this moral depth is not an add-on to educational technology. It is a precondition for using it well.
The Illusion of Platform Neutrality
Educational platforms often present themselves as neutral infrastructure, administrative services devoid of overt value claims, while subtly shaping how learning, participation, and success are defined. Systems such as Canvas, Turnitin, or ClassDojo commonly frame their interface and functionality as purely technical, thereby deflecting attention from the evaluative assumptions embedded in their design. This framing embodies what Chander and Krishnamurthy (2018) identify as the “myth of platform neutrality”, the idea that platforms act merely as intermediaries rather than agents with embedded norms and agendas.
However, platform design always encodes assumptions about pedagogy and behaviour. These systems function as normative frameworks, not passive tools. Williamson (2017) describes ClassDojo as an educational data assemblage, illustrating how its gamified interface (avatars, point systems, progress bars) performs moral work. Despite its branding as a communication app, ClassDojo privileges compliance, positivity, and punctuality: it constructs a particular model of the ideal learner and of classroom discipline.
These dynamics are reinforced through default settings in Virtual Learning Environments (VLEs). For instance, standardised deadlines enforce temporal norms; hidden peer contributions devalue collaborative dialogue; and restrictive discussion settings privilege individual, quantified outputs. Learning dashboards frequently rely on traffic-light metaphors, colour-coded risk indicators tied to login frequency or submission punctuality. These design choices encode a narrow epistemology: learning is reduced to visible, measurable behaviours at the expense of deeper epistemic engagement.
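To see how little code it takes to encode such norms, consider the following sketch. It is purely illustrative: the settings, names, and threshold are invented, not drawn from any real VLE, but each “technical” default quietly answers an evaluative question about what learning is.
```python
# Hypothetical VLE-style defaults: the names, values, and threshold are
# invented for illustration, not taken from any real platform.
DEFAULT_COURSE_SETTINGS = {
    "deadline_policy": "hard_cutoff",            # presumes punctuality is a proxy for learning
    "late_penalty_per_day": 0.10,                # quantifies lateness as a moral debit
    "peer_posts_visible_before_posting": False,  # frames discussion as individual output
    "engagement_metric": "login_count",          # equates presence in the system with engagement
    "at_risk_threshold_logins_per_week": 2,      # encodes a norm of the 'ideal learner'
}

def is_flagged_at_risk(logins_this_week: int,
                       settings: dict = DEFAULT_COURSE_SETTINGS) -> bool:
    """Reproduce a dashboard-style 'at-risk' judgement from one visible behaviour.

    Note what the function cannot see: offline reading, peer support,
    reflective work. The value judgement lives in the threshold, not the code.
    """
    return logins_this_week < settings["at_risk_threshold_logins_per_week"]

print(is_flagged_at_risk(1))  # True: one login this week, whatever its depth, means 'at risk'
```
The point is not the specific numbers but that someone chose them, and that choice is a strong evaluation made on educators’ behalf.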
This design logic reflects what Hamilton and Friesen (2013) critique as the binary of essentialism and instrumentalism in studies of educational technology. They argue that researchers and technologists often adopt one of two naïve positions: that technology inherently embodies pedagogical values (essentialism), or that it is a neutral tool for human-defined goals (instrumentalism). Both perspectives, they contend, obscure the critical reality that technology and pedagogy co-create each other: technologies are social objects that shape, and are shaped by, educational values and contexts.
These concerns align with broader critiques of technological determinism. Feenberg (1999) famously dismantles instrumentalism by showing that technologies are not neutral means to ends, but mediators of what ends are even conceivable. Similarly, Suárez‑Guerrero, Rivera‑Vargas and Raffaghelli (2023) identify the myth of neutrality as one of several ideological tropes that legitimise EdTech adoption. According to them, this myth obscures the ideological infrastructure of digital learning, particularly the emphasis on managerial efficiency, scalability, and behavioural control.
By framing platform design decisions as technical defaults rather than value-laden choices, providers render moral and epistemic commitments invisible. Interface metaphors, backend defaults, and data architectures all privilege particular educational imaginaries. Uncritically adopting these defaults cedes pedagogical judgement to technical logics, often misaligned with critical, student-centred values.
To counter this, educators must engage platforms on their own terms, not only evaluating outcomes, but interrogating how assumptions about learning, visibility, and compliance are embedded in the software itself. Understanding the epistemological and moral framings of platform features empowers educators to question: What forms of learning are made visible? What norms are privileged? What possibilities are constrained?
Challenging the myth of neutrality is therefore not peripheral; it is essential to ethical pedagogy. Platforms do not just support learning; they help define what learning can be.
Case Study: Learning Analytics Dashboards
Learning analytics dashboards (LADs) are increasingly used to guide decisions in higher education, offering real-time visualisations of student engagement, progress, and risk. They promise objectivity, delivering “just the data” to inform timely interventions. Yet their apparent neutrality conceals deeper assumptions about what counts as learning, who counts as a successful student, and how educational value is defined.
Consider Purdue’s Course Signals system, which uses predictive modelling to generate colour-coded alerts for students: green for on track, yellow for caution, and red for at risk (Arnold and Pistilli, 2012). These visual metaphors are not just user-friendly representations; they carry moral weight. Traffic-light colours reduce complex educational trajectories to simplified normative signals. Red denotes failure, green implies success. This visual logic renders students’ progress legible to institutional priorities, translating multidimensional learning into a digestible performance metric.
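The normative work of such a system can be made explicit with a small sketch. The actual Course Signals model is not public in this form, so the features, weights, and cut-offs below are invented purely to show where the value judgements sit.
```python
# Invented traffic-light classifier in the style of Course Signals; the real
# system's features and weights are not reproduced here.
def risk_signal(logins_per_week: float,
                on_time_submission_rate: float,
                current_grade: float) -> str:
    """Collapse a student's trajectory into one of three colours."""
    # The weights decide which behaviours 'count' as learning.
    score = (0.2 * min(logins_per_week / 5, 1.0)
             + 0.3 * on_time_submission_rate
             + 0.5 * current_grade)
    # The cut-offs turn a continuum into success ('green') and failure ('red').
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "yellow"
    return "red"

# A student holding a 65% grade who is invisible to the platform (no logins,
# late submissions) is rendered 'red': the metric sees absence, not quality.
print(risk_signal(logins_per_week=0, on_time_submission_rate=0.0, current_grade=0.65))
```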
Other metaphors, such as bar charts, completion rings, and badges, likewise embody pedagogical values. Bar charts suggest linear progress; badges reward punctuality or task completion. These visual grammars frame desirable behaviours and establish what is worth noticing. In this sense, dashboards are not merely technical artefacts. They are moral technologies: tools that encode and enact normative assumptions about what matters in education.
Empirical research supports this critique. A systematic review by Kaliisa et al. (2024) found mixed evidence that dashboards improve learning outcomes. While they can increase participation, they often shift attention toward behaviours that are easily tracked: frequency of logins, timeliness of submissions, or surface-level engagement. Students may end up “gaming the system” by focusing on what the dashboard sees, rather than engaging more deeply with learning.
The design of LADs thus shapes what becomes visible, and what remains invisible. Learning activities that cannot be easily measured, such as reflective writing, emotional labour, or peer support, are often excluded. This creates what Williamson (2017) calls educational data assemblages, platforms that curate reality through selective visibility. In doing so, they produce a moral ordering of students according to institutional logics of productivity and risk.
Even advocates of dashboard innovation acknowledge these risks. Khosravi et al. (2021) argue for “intelligent” LADs that maintain human-in-the-loop oversight, enabling educators to interpret data through their own pedagogical lenses. Their emphasis on explainability and transparency highlights a crucial point: without thoughtful mediation, dashboards can displace rather than support teacher judgement.
To be clear, dashboards are not inherently detrimental. When used reflexively and contextualised within a broader pedagogical framework, they can support self-regulated learning and timely support. However, they must be treated as interpretive tools, not definitive assessments. Educators should ask: What does this visualisation assume about learning? What does it reward or obscure? What alternative indicators might better reflect educational depth and student wellbeing?
In short, learning analytics dashboards do not simply report on learning; they shape it. They function as moral technologies, codifying educational values through design. To use them ethically, educators must engage critically with their assumptions and reclaim evaluative agency over what counts as meaningful learning.
Consequences for Judgement and Feedback
When a teacher’s first insight into a student’s progress comes from a dashboard alert or predictive score, something important is lost. Educational platforms increasingly mediate professional judgement through algorithmically generated data (traffic-light indicators, risk thresholds, activity logs), framing learning in terms of what can be visualised and compared. While such systems promise efficiency and scalability, they also constrain how educators make sense of students’ development, and how feedback is formulated and delivered.
Rather than engaging directly with student work, teachers may come to rely on visual representations of participation or performance. Learning analytics dashboards provide shortcuts to interpretation, highlighting who is “at risk” or “inactive.” Yet these indicators are not neutral; they shape what gets seen and what is deemed worthy of response. Williamson (2017) describes this as the creation of educational data assemblages, systems that produce legibility through selective visibility. A student’s nuanced effort or reflective growth may be ignored simply because it cannot be tracked.
The consequences for feedback are significant. Research shows that many dashboards reduce learning to surface behaviours: frequency of logins, time-on-task, punctual submission (Kaliisa et al., 2024). Feedback generated from such indicators risks being transactional (alerts to act, reminders to submit) rather than dialogic or developmental. Feedback literacy research suggests that meaningful feedback must engage learners in sense-making and foster agency, not merely deliver performance cues (Carless and Boud, 2018).
This shift also risks eroding professional judgement. Borrowed from labour studies, the concept of deskilling refers to the way technological systems can displace professional expertise by routinising tasks (Braverman, 1974). In education, this occurs when automated recommendations or predictive analytics are treated as authoritative, reducing the teacher’s role to that of a responder or overseer. The richness of pedagogical decision-making (interpreting context, understanding individual needs, making value-laden judgements) is replaced by data-driven prompts.
Khosravi et al. (2021), while advocates of learning analytics, acknowledge this danger. Their model of “intelligent dashboards” includes human-in-the-loop features to prevent automation from displacing human judgement entirely. They argue that explainability and transparency are essential to preserve educator agency. However, these safeguards are not always built into mainstream platforms, and where they are, educators may lack the time or confidence to critically engage with them.
To counter these trends, educators must actively reclaim their evaluative roles. This means resisting the temptation to treat dashboards and automated feedback as definitive sources of truth. Instead, teachers should contextualise analytics with their own insights, triangulate data with qualitative evidence, and remain attuned to what dashboards cannot see: motivation, confusion, ambition, or care. As Greller and Drachsler (2012) argue, data should inform pedagogical reflection, not replace it.
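What might such triangulation look like in practice? The sketch below, built on invented data structures rather than any real platform’s API, treats an analytics flag as a prompt for teacher interpretation rather than a trigger for automated action.
```python
# A minimal sketch of human-in-the-loop triangulation; all structures are
# hypothetical. The design choice: an uninterpreted flag never acts on its own.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalyticsFlag:
    student_id: str
    signal: str    # e.g. 'red' from a dashboard
    basis: list    # which proxies produced it, e.g. ['low_logins']

@dataclass
class TeacherJudgement:
    context: str           # qualitative evidence the dashboard cannot see
    action: Optional[str]  # what, if anything, to do

def triangulate(flag: AnalyticsFlag, judgement: Optional[TeacherJudgement]) -> str:
    """Queue a conversation for unreviewed flags; defer to judgement otherwise."""
    if judgement is None:
        return f"Review {flag.student_id}: signal '{flag.signal}' based on {flag.basis}"
    return judgement.action or f"No action for {flag.student_id}: signal explained by context"

flag = AnalyticsFlag("s123", "red", ["low_logins", "late_submissions"])
print(triangulate(flag, None))
print(triangulate(flag, TeacherJudgement(
    context="Caring responsibilities; engaging deeply through annotated readings",
    action=None)))
```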
Ultimately, the feedback relationship in education is not merely transactional; it is ethical, interpretive, and situated. If educators cede too much authority to platforms, they risk hollowing out one of the most vital dimensions of teaching. Strong evaluation demands that feedback not only inform but also affirm values: what is worth knowing, becoming, and struggling for. That is not something a dashboard can decide.
Resisting through Value‑Explicit Pedagogy
If educational platforms encode particular values in their architecture and affordances, then educators must respond not with resignation but with deliberation. Value-explicit pedagogy, an approach that surfaces and enacts the underlying moral purposes of teaching, calls on educators to articulate their own evaluative frameworks in the design, enactment, and reflection of practice. This is not merely philosophical: it is a practical commitment to foregrounding what matters in education and to resisting the hidden curricula of platforms.
Such resistance begins with the cultivation of critical data literacy. This refers to the capacity of students and educators to interrogate data-driven systems, understanding who collects data, how metrics are defined, what gets counted, and what remains invisible. Sander (2020) argues that critical big data literacy requires not only technical skills but also the ability to question how data practices affect power dynamics, social justice, and democratic participation. This includes examining the values embedded in data infrastructures and the broader societal implications of algorithmic decision-making. Atenas, Havemann and Timmermann (2023) further emphasise that such literacy must be underpinned by data ethics, enabling learners to recognise the normative assumptions that shape data collection, interpretation, and use in educational contexts. By teaching students to critique dashboards, metrics, and feedback systems, we help them resist the framing of learning as performance management.
To move from critique to co-creation, educators can support participatory platform governance. This involves shared decision-making in the configuration and oversight of educational technologies, including the definition of data use policies, the visibility of analytics, and the design of assessment mechanisms. Emerging models such as data commons or platform cooperatives offer frameworks for collective stewardship over digital learning infrastructures (Zuboff, 2019; New America Foundation, 2025). Involving students in decisions about what counts as meaningful participation transforms them from data subjects into active agents of pedagogical meaning-making.
Equally important is the practice of dialogic pedagogy, a teaching approach grounded in dialogue, mutual recognition, and relational feedback. Dialogic methods resist transactional feedback loops by creating space for reflection, negotiation, and shared understanding. They counter the logics of automation by reasserting feedback as an act of ethical relation. Carless and Boud (2018) emphasise that feedback literacy is cultivated not through nudges or scoring but through co-construction, dialogue, and trust. Dialogic pedagogy therefore serves as both a pedagogical method and a mode of resistance.
These practices also respond to broader ethical concerns in digital education. Platform logics increasingly normalise surveillance, undermining student autonomy and reinforcing behavioural conformity. Manolev, Sullivan and Slee (2018) show how tools like ClassDojo contribute to a culture of performative classroom discipline, where reputation and compliance displace exploration and dissent. Critical pedagogy demands that educators challenge datafication by foregrounding transparency, securing meaningful consent, and prioritising student wellbeing over institutional metrics.
This is ultimately a question of power and purpose. If we allow platforms to define learning through behavioural proxies and predictive scores, we surrender the imaginative space of education to technical rationality. Instead, value-explicit pedagogy reclaims that space by insisting on moral discernment and participatory design. It enacts Taylor’s (1985) concept of strong evaluation, grounding teaching in qualitative distinctions about what is worth knowing, doing, and becoming.
Practical steps might include co-designing dashboards that prioritise reflection over compliance, integrating narrative journals as assessment artefacts, or forming classroom data councils to examine what gets tracked and why. Such actions reposition both teachers and students as co-constructors of meaning, actively shaping the moral and educational futures they inhabit.
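A classroom data council could even be given a technical footing. The following speculative sketch (every name invented) inverts the usual platform default: nothing is tracked unless the council has approved the indicator and recorded its rationale.
```python
# Speculative sketch of participatory metric governance: indicators exist only
# if a classroom data council has approved them and recorded why.
APPROVED_INDICATORS = {
    "reflection_journal_submitted": "Council: reflection is a core signal of learning.",
    "peer_feedback_given": "Council: we value dialogue over individual output.",
}

def record_event(ledger: list, indicator: str, student_id: str) -> None:
    """Append an event only if tracking it has been collectively authorised."""
    if indicator not in APPROVED_INDICATORS:
        raise PermissionError(
            f"'{indicator}' is not an approved indicator; take it to the data council.")
    ledger.append({"student": student_id, "indicator": indicator})

ledger = []
record_event(ledger, "reflection_journal_submitted", "s123")  # allowed
# record_event(ledger, "login_count", "s123")  # PermissionError: never co-approved
print(ledger)
```
Here the default is refusal rather than capture: the burden of justification falls on whoever wants to measure, not on whoever is measured.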
Conclusion
Charles Taylor reminds us that human beings are self‑interpreting animals, constantly making sense of themselves and their world through qualitative distinctions about what is worth pursuing, becoming, or upholding. This capacity for strong evaluation is not a secondary feature of human life; it is foundational to agency itself (Taylor, 1985). Through strong evaluation, individuals articulate their values, shape their identities, and orient their actions in ways that go beyond mere preference or instrumental calculation.
When educational technologies present themselves as neutral or “just tools,” they obscure the evaluative assumptions that underlie their design. Interfaces, algorithms, and data models suggest what counts as success, what should be seen, and what ought to be ignored. To treat such systems as value-free is to abdicate a core pedagogical responsibility: the responsibility to judge, interpret, and act in ways that are aligned with educational purpose. It is to surrender strong evaluation in favour of procedural compliance with platform logic.
This abdication is not benign. When educators rely uncritically on dashboards, default rubrics, or predictive analytics, the moral and pedagogical dimensions of judgement are flattened. Feedback risks becoming a transactional output rather than a dialogic process. Pedagogical relationships are reduced to flows of data. Over time, this not only narrows the scope of what counts as teaching and learning but also deskills educators by displacing their practical and ethical judgement with automated proxies.
Yet this trajectory is not inevitable. Educators can reclaim their agency by designing and enacting pedagogy as if values matter. This means bringing to the surface the evaluative frameworks that underpin our choices: what we believe constitutes meaningful learning, equitable participation, or student flourishing. It also means building capacity to interrogate the values embedded in platforms themselves.
Consider, for example, what it would mean to create learning environments where relational care, critical reflection, or dissent were treated as core indicators of learning rather than anomalies. What if students were invited to critique the metrics by which they are measured, or to co-construct the evaluative criteria used in assessment? What if platforms were designed with the participatory governance of educators and learners, not just engineers and administrators? These are not utopian ideals; they are pedagogical responsibilities rooted in a commitment to education as a value-laden practice.
Such practices enact Taylor’s vision of human agency. They affirm that educators and students are not passive recipients of platform design but active interpreters of what matters. In resisting the proceduralism of platform neutrality, we keep alive the ethical horizon of pedagogy.
The concluding provocation, then, is both philosophical and practical:
What would it mean to teach and design as if values matter?
Not as an abstract slogan, but as a guiding principle for every interface, assessment, and interaction in digital education. In doing so, we reclaim our roles as educators, not as implementers of platform logic, but as custodians of meaning.
Bibliography
- Arnold, K. E. and Pistilli, M. D. (2012) ‘Course Signals at Purdue: using learning analytics to increase student success’, in Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK ’12). New York: ACM, pp. 267–270. https://doi.org/10.1145/2330601.2330666
- Atenas, J., Havemann, L. and Timmermann, C. (2023) ‘Reframing data ethics in research methods education: a pathway to critical data literacy’, International Journal of Educational Technology in Higher Education, 20(11). https://doi.org/10.1186/s41239-023-00380-y
- Bayne, S. (2015) ‘Teacherbot: interventions in automated teaching’, Teaching in Higher Education, 20(4), pp. 455–467. https://doi.org/10.1080/13562517.2015.1020783
- Braverman, H. (1974) Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. New York: Monthly Review Press.
- Carless, D. and Boud, D. (2018) ‘The development of student feedback literacy: enabling uptake of feedback’, Assessment & Evaluation in Higher Education, 43(8), pp. 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
- Chander, A. and Krishnamurthy, V. (2018) ‘The myth of platform neutrality’, Georgetown Law Technology Review, 2, pp. 400–416. http://dx.doi.org/10.2139/ssrn.4849156
- Feenberg, A. (1999) Questioning Technology. London: Routledge.
- Greller, W. and Drachsler, H. (2012) ‘Translating learning into numbers: a generic framework for learning analytics’, Educational Technology & Society, 15(3), pp. 42–57.
- Hamilton, E. and Friesen, N. (2013) ‘Online education: a Science and Technology Studies perspective’, Canadian Journal of Learning and Technology, 39(2), pp. 1–21. https://doi.org/10.21432/T2001C
- Kaliisa, R., Misiejuk, K., López-Pernas, S., Khalil, M. and Saqr, M. (2024) ‘Have learning analytics dashboards lived up to the hype? A systematic review of impact on students’ achievement, motivation, participation and attitude’, in Proceedings of the 14th Learning Analytics and Knowledge Conference (LAK ’24), Kyoto, Japan, 18–22 March. New York: ACM, pp. 295–304. https://doi.org/10.1145/3636555.3636884
- Khosravi, H., Shabaninejad, S., Bakharia, A., Sadiq, S., Indulska, M. and Gasevic, D. (2021) ‘Intelligent learning analytics dashboards: automated drill-down recommendations to support teacher data exploration’, Journal of Learning Analytics, 8(3), pp. 133–154. https://doi.org/10.18608/jla.2021.7279
- Knox, J. (2020) ‘Artificial intelligence and education in China’, Learning, Media and Technology, 45(3), pp. 298–311. https://doi.org/10.1080/17439884.2020.1754236
- Manolev, J., Sullivan, A. and Slee, R. (2018) ‘The datafication of discipline: ClassDojo, surveillance and a performative classroom culture’, Learning, Media and Technology, 44(1), pp. 36–51. https://doi.org/10.1080/17439884.2018.1558237
- New America Foundation (2025) All Aboard: The Ethics of Campus AI and Higher Education’s New Trolley Problem. New York: New America. Available at: https://www.newamerica.org/oti/briefs/new-trolley-problem/
- Sander, I. (2020) ‘What is critical big data literacy and how can it be implemented?’, Internet Policy Review, 9(2). Available at: https://policyreview.info/articles/analysis/what-critical-big-data-literacy-and-how-can-it-be-implemented (Accessed: 27 July 2025).
- Suárez-Guerrero, C., Rivera-Vargas, P. and Raffaghelli, J. E. (2023) ‘EdTech myths: towards a critical digital educational agenda’, Technology, Pedagogy and Education, 37(1), pp. 1–16. https://doi.org/10.1080/1475939X.2023.2240332
- Taylor, C. (1985) ‘What is human agency?’, in Human Agency and Language: Philosophical Papers 1. Cambridge: Cambridge University Press, pp. 15–44.
- Taylor, C. (1989) Sources of the Self: The Making of the Modern Identity. Cambridge: Cambridge University Press.
- Williamson, B. (2017) Big Data in Education: The Digital Future of Learning, Policy and Practice. London: Sage Publications.
- Williamson, B. (2017) ‘Learning in the “platform society”: disassembling an educational data assemblage’, Research in Education, 98(1), pp. 59–82. https://doi.org/10.1177/0034523717723389
- Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.