Opening
When ChatGPT was released to the public in late 2022, I was working at the University of Helsinki’s Global Campus project, an internal initiative exploring how emerging technologies like generative AI and extended reality might transform online education, and how we could leverage them. In hindsight, the timing feels almost unreal. Here we were, a small team tasked with figuring out what these tools could do for higher education, and suddenly the world’s most capable chatbot had just been handed to anyone with a browser.
I was nothing short of enthusiastic. I started tinkering with ChatGPT, built a workshop around it, and eventually created a training programme I delivered across faculties, to admins and teaching staff. I believed, and still largely believe, that this tool, used thoughtfully, has genuine educational value.
What I did not do was ask how these tools had been built.
I don’t exactly recall when I first learned about the problems with ChatGPT and Midjourney, but at some point I was made aware of the copyright issues. There was the odd report of lawsuits from writers and visual artists who had discovered their work had been scraped without consent. The environmental and human rights concerns came to my attention much later: water consumption figures attached to data centre locations, quietly alarming numbers attached to energy use, a report about content moderation workers in Kenya earning a few dollars an hour to screen traumatic material so that language models could learn what not to say. I must have registered each piece of information, but I didn’t really react – not really internalising, just continuing to do the thing I was already doing. Yet a mild discomfort was forming.
I am writing this essay because I think that mild discomfort – and what we do with it, or don’t do with it – is actually the centre of the story. The conversation about “ethical GenAI use” has grown enormously in the years since. I have participated in it professionally and taken it seriously. And I have also noticed that it can function, for me and for others, as a way of feeling responsible without necessarily being consistent. That gap, between the ethics we apply to new technology and the ethics we apply to everything else, is what I want to examine here.
I - The Checklist Has Arrived
At some point in the last three years, “ethical AI use” became a genre. You know it when you see it: the institutional guidelines with their numbered principles, the training modules with their reflection prompts, the policy statements that open with acknowledgement of complexity and close with a list of dos and don’ts. Universities produced them. Ministries produced them. Professional associations produced them. I produced some of them, or at least contributed to the culture that made them feel necessary and sufficient.
I am not being dismissive. The checklist covers real ground. Transparency: be honest with your audience about what was generated and how. Plagiarism and academic integrity: the question of authorship when the machine does the writing. Bias in outputs: the well-documented tendency of systems trained on skewed data to reproduce and amplify that skew. Copyright: the unresolved and genuinely complex question of what it means to train on human creative work without consent or compensation. Environmental cost: the water and energy consumed by training LLMs and by the infrastructure that makes the chat window on your screen feel effortless. These are not trivial concerns. They deserve the serious attention they have received.
I have taught this material. I believe most of it.
But the checklist has a shape, and the shape is worth examining. Almost everything on it points in the same direction: outward, and downward. Toward the student submitting work that possibly isn’t fully theirs. Toward the technology company that scraped without asking. Toward the tool that reproduces stereotypes. Toward the individual user who didn’t disclose. The person holding the checklist – the institution, the educator, the policymaker, the workshop facilitator – stands at the top of the frame, doing the scrutinising. The ethical gaze moves in one direction.
Shannon Vallor writes that AI is a mirror, that what we see in it reflects our values, our blind spots, our distortions back at us. It is a powerful metaphor, and I have used it in my workshops more than once. What I have been slower to ask is who dares to look into the mirror – and accept what comes back.
Because the mirror only works if something precedes the looking. Not more knowledge, not better tools, but a cultivated willingness to be unsettled by what you find. Self-awareness is not a default state. It is built – slowly, through practice, through education at its most demanding. Without it, you can stare into the mirror as long as you like and the reflection tells you nothing. You glance, you move on.
This raises an uncomfortable question about what AI training programmes – including the kind I have been running for the past several years – are actually for. If the goal is to smooth adoption, reduce anxiety, and equip people with practical skills, that is a legitimate goal. But it is not the same goal as building the capacity for honest self-examination. Training designed primarily to make people comfortable with AI polishes the mirror rather than preparing the viewer. And a polished mirror, pointed outward, reflecting everything except the person holding it, is not an ethical instrument. It is a reassurance. Without that capacity for honest self-reflection, the checklist is just optics.

II - The Sore Truth
Let me say this plainly.
There is a double standard at work in how we talk about AI ethics, and it operates so consistently that it is hard to call it accidental. One standard is applied to the new and visible: the tool, the student, the technology company. A second, considerably more lenient standard is applied to everything else – to the institution, to the educator, to the longstanding practices that predate the arrival of ChatGPT and will outlast whatever comes next. In short: one standard for the tool, a gentler one for ourselves.
The pattern is not hard to recognise. We scrutinise what is new and visible, and we leave the rest largely alone. The institution that produces AI ethics guidelines might not apply the same rigour to its own data practices, its supervision culture, or how it communicates difficult decisions. And we, as individuals, are not so different. The question of what we eat, how we travel, whose labour we depend on without examining – these sit in the same unexamined background as the institutional practices we are quick to critique.
And then we write guidelines about whether students should disclose their use of generative AI.
None of this cancels out the concerns about generative AI. Bias in training data is real. The environmental cost is real. The questions about authorship and consent are real and unresolved. The point is not that AI ethics doesn’t matter. The point is that ethics applied selectively – rigorously to what is new and external, leniently to what is familiar and internal – is not ethics. It is reputation management. It is the appearance of seriousness without the substance of consistency.
There is a harder example that I find genuinely difficult to sit with. The tools I use in my work – the language models I have built workshops around and written simulations with – exist on a continuum with systems being deployed for autonomous targeting, battlefield decision support, and drone strike authorisation. Not metaphorically. The companies are often the same. The infrastructure is shared. The mathematics is identical. The civilian framing of these technologies – creative, democratising, helpful – makes it easier not to ask what else they are doing and for whom. I do not have a clean answer to this. But consistent ethics would require asking the question, and I have mostly not asked it. I am asking it now, belatedly, trying to understand its full scope.
This is not a confession designed to perform humility and then move on. It is the actual problem this essay is about. We have developed a sophisticated vocabulary for scrutinising AI, while the larger questions about power, accountability, and what these systems are for remain under-asked. Not because we are cynical. Because it is easier.
The person who looks at all of this and decides not to engage with these tools is not behind the curve. They may be the most consistent person in the room.
III - Why Selective Ethics Is So Attractive
Call it cognitive dissonance, call it a double standard – either way, we say one thing and do another without really noticing we are doing it. Most of us genuinely don’t see it. We care about the things we say we care about. The guidelines get written in good faith.
The double standard comes from something more ordinary than bad intention: a very human tendency to scrutinise what is new and unfamiliar, while leaving the practices we have grown up with, built our careers inside of, and come to experience as simply how things work, largely unexamined.
And there is something genuinely easier about directing ethical concern at a tool than at a culture, a habit, or a relationship. A tool can be assessed, regulated, disclosed, or avoided. A culture is something you are inside of, that involves people you know and depend on, that has given you things you value. Raising hard questions about it carries a cost that raising hard questions about a software product simply does not. That asymmetry is not a character flaw. It is the ordinary friction of self-examination — and it applies to all of us, not just to those in positions of authority.
It is also worth remembering where we are in the cycle. The internet provoked intense ethical debate at its emergence – misinformation, the erosion of expertise, attention fragmentation, the death of the bookshop, remember? Wikipedia was going to make us stupider. Google was going to make us lazier. Some of those concerns turned out to be well-founded. Others dissolved as the technology became infrastructure, mundane and embedded, and the next new thing arrived to absorb the anxiety. Nobody today pauses before a Google search to consider its carbon footprint – not because the energy question disappeared, but because the novelty did. GenAI is almost certainly on a similar trajectory. The Gartner Hype Cycle has a trough of disillusionment for a reason.
I say this not to dismiss the concerns – the IEA data is real: global data centre electricity demand reached roughly 415 terawatt-hours in 2024, with projections suggesting it will more than double by 2030. The bias is real. The questions about consent and authorship are genuinely unresolved. I say it because some of what we are calling GenAI ethics at this particular moment is anxiety about novelty dressed in ethical language. And novelty anxiety, however sincerely felt, is not the same as a principled ethical position. It is worth knowing the difference, especially when you are the person facilitating the workshop.
Performing ethical seriousness about generative AI has become a mark of sophistication. To raise concerns about bias, environmental cost, or transparency signals that you are thoughtful, critical, alert to power. The signal is not false. But it is available at a price that doesn’t require you to examine your own data habits, your own consumption choices, or your own contribution to the cultures you’re critiquing. The academic who chairs a session on AI accountability and then drives home alone when the train runs every fifteen minutes is not unusual. Neither is the workshop participant who nods at the slide about AI’s environmental cost and orders a steak at the conference dinner. We perform the concern that is visible and socially rewarded. The rest we leave alone.
Recent, as yet unpublished, research by Oldemburgo de Mello, Inzlicht and colleagues suggests something worth sitting with: for some of us, opposition to generative AI is not really about weighing risks and trade-offs at all. It is about deeply held moral commitments that function more like sacred values – positions that counter-evidence does not update, but entrenches. Once an ethical stance hardens into identity, it stops processing information as information. That is not a character flaw. But it is worth noticing, because it is also the point at which ethical reasoning stops.
Something sharper is also at work. When we promote AI literacy – when we build frameworks, design curricula, run workshops – we are doing something valuable. But we are also, as a quiet side effect, making a claim about inevitability. Literacy frameworks presuppose the thing you need to be literate about. Nobody builds a nuclear weapons literacy programme for citizens. The choice to frame GenAI as something to be navigated rather than questioned is already a position; it just doesn’t announce itself as one. The terrain has been decided before the conversation starts.
This is not an argument against AI literacy. It is an argument for noticing what the literacy frame cannot ask — and making sure someone is asking it. Maja Göpel, speaking at OEB 2025, put it plainly: optimisation narratives are choices, not destiny. The literacy framework is one of those narratives. Choosing it is fine. Choosing it without noticing you chose it is the problem.
James O’Sullivan wrote in 2025 that there may be a case for AI illiteracy, for deliberate disengagement as a valid ethical stance. I am a convinced adopter and I am not making that argument. But I find his position useful in the way that a well-placed question is useful: it exposes what the literacy discourse structurally cannot say. The person who decides not to engage with these tools is not simply behind. They may have looked at the terrain and decided they do not accept the premise. That is not a failure of literacy. It is, potentially, an exercise of exactly the critical capacity that good literacy education is supposed to produce.

IV - What Consistent Ethics Would Actually Demand
Is applying ethics consistently even possible? Can we scrutinise how others use GenAI while leaving our own daily choices – what we eat, how we get to work, what we buy, where we fly – quietly unexamined? The honest answer is probably: not fully. Not all the time. But that is not an argument for giving up on consistency. It is an argument for being honest about where our attention goes and why.
This is not a point about any particular lifestyle choice. It is an observation about how moral attention works. We cannot apply rigorous ethical scrutiny to everything simultaneously. That is not how attention functions, and the person who claims otherwise is either not examining much or performing a consistency they don’t actually have. The question is not whether we are selective. We all are. The question is whether we notice the pattern in our selections, and whether that pattern is something we have chosen or something that has been chosen for us by the visibility and novelty of whatever the current conversation happens to be about.
Consistent ethics would not demand that we scrutinise everything at once – that is an impossible standard, and a paralysing one. It would demand something more modest and more uncomfortable: that we notice when we are being genuinely rigorous and when we are performing rigour. That we periodically ask whether the things we scrutinise most loudly are the things that most deserve scrutiny, or simply the things that are most visible right now.
In practice, what does this look like? It looks like the person who writes AI ethics guidelines also examining, with equivalent seriousness, the supervision culture they operate in, the hiring decisions they make, their relationship with precarious colleagues. It looks like the educator who asks students to reflect on their use of a chatbot also asking themselves what habits they are modelling in how they run a seminar, manage a disagreement, or communicate a difficult decision. It looks like being as honest about the unexamined corners of our own practice as we are about the IEA data on the slide.
None of this is comfortable. That is precisely the point. Ethics is not supposed to be comfortable. It is supposed to be consistent. The moment we reserve it for what is new, convenient to scrutinise, and located safely outside ourselves, it stops being ethics and becomes something else: a performance of moral seriousness that leaves the audience, including ourselves, feeling better than the situation warrants.
There is a version of this essay that ends here, with a call to do better. I am not writing that version. Not because doing better doesn’t matter, but because I think the call to action is, in this context, one more performance. What I am arguing for is something prior to action: a habit of noticing. Noticing when you apply scrutiny and when you don’t. Noticing what the discourse around you makes visible and what it leaves in the blur. Noticing when you are genuinely reasoning about ethics and when you are, in Inzlicht’s terms, operating from a sacred value that no evidence will touch.
That noticing is not natural. It is not automatic. It is, as I argued earlier in this essay, something that has to be built, through education that is willing to unsettle rather than reassure, through training that prepares the viewer rather than polishes the mirror. It is, in other words, the thing that AI literacy programmes, at their most honest and most demanding, could actually try to produce.
Whether they do is another question.
Closing
There is a document circulating on LinkedIn. It is called Kansalaisten tekoälyosaamisen viitekehys, a framework for citizens’ AI competence produced by Finland’s Ministry of Education and Culture, published in early 2026. It was developed over a year by a working group of researchers, educators, and experts from across Finnish universities and institutions. It is a serious document, carefully made.
At its most demanding level – the level it calls kehittäjä, developer – it asks citizens to do the following: evaluate the long-term societal, ethical, and sustainability impacts of both their own and others’ actions. Promote and build accountability and governance structures that support ethical, human-centred, and sustainable AI development. Lead the public conversation about AI, creating new openings and trajectories.
I find this genuinely moving, and also genuinely unsettling.
Moving, because it represents something real: a national institution taking seriously the idea that AI literacy is not just a technical skill but a civic and moral one. The framework does not ask citizens to become prompt engineers. It asks them to become the kind of people who think carefully about consequences, their own and others’. That is a meaningful ambition; it did not have to be in the document, and yet it is.
Unsettling, because the question it raises is the same one this essay has been circling from the beginning. The ministries, universities, and research institutions that produced this framework – that ask citizens to build accountability structures and lead the conversation – are themselves organisations. They have hiring practices, power dynamics, resource allocation decisions, and communication cultures that are all, in principle, subject to exactly the kind of ethical scrutiny the framework describes. The question of whether they apply that scrutiny to themselves, with the same rigour they are now asking of citizens, is not a rhetorical one. It is an open one.
I am not in a position to answer it. I work inside one of those institutions. I have spent the last several years promoting the adoption of the very technologies this essay has been examining, building tools with them, running workshops that teach others to use them. I have been, in the language of the framework, somewhere between applying and developing – while the harder questions about consistency and accountability were easier to raise in slides than to live by.
That calculation – to return one last time to the line from my own workshop materials – is one only you can make. And it is worth making consciously. Not just about how much energy your generative AI queries consume, or whether you disclose your use of it to your students. About all of it. About whether the ethics you bring to new technology is the same ethics you bring to everything else, or whether it is something more convenient: a performance of responsibility that leaves the harder questions quietly alone.
The framework exists. The expectations are now written down, in careful Finnish, by serious people, with the weight of a ministry behind them. Whether the institutions holding that document, and the people inside them, myself included, are willing to be held to it is the question the document cannot answer for us.
Epilogue
There is one question this essay has not asked, and the omission is not accidental. We have talked about transparency, about bias, about energy, about who discloses what to whom. We have not talked about who benefits.
The companies building and deploying these systems are not doing so as a public service. They are doing so because it is profitable, and it is profitable in a specific way: AI is cheaper than people. When a firm replaces ten workers with one system, the productivity gain does not go to the ten workers. It goes to the firm. This is not a critique of technology. It is an observation about how technology is owned.
The effective altruism movement and several prominent figures in the AI industry have offered a counter-argument: automation creates abundance, abundance creates leisure, and eventually everyone benefits from the rising tide. It is a coherent argument. It also happens to be precisely the argument that benefits the people making it. And it has been made, with equal confidence, at every previous wave of technological displacement – each time predicting a future of shared prosperity, each time delivering something considerably more complicated for the people on the wrong side of the transition.
We are currently running that experiment again, at speed, and at scale. The ethical frameworks being produced do not, in the main, ask about redistribution. They ask about transparency. They ask about bias. They ask about whether you told your students you used a chatbot. They do not ask who owns the gains when your institution decides it no longer needs as many people as it did last year.
That question is not outside the scope of ethics. It is ethics. The fact that it remains largely absent from the mainstream AI ethics discourse is, in itself, a data point worth sitting with.
Key Sources & References
O’Sullivan, J. (2025). The Case for AI Illiteracy. [publication TBC]
Digitaalinen itsenäisyys (Digital independence), citizens’ initiative. kansalaisaloite.fi/fi/aloite/16691
EU AI Act, Article 3 (56) — AI literacy definition.
Oldemburgo de Mello, V., Côté, É., Ayad, R., Inbar, Y., Plaks, J., & Inzlicht, M. (2026). The moralization of artificial intelligence. Manuscript under review.
Opetus- ja kulttuuriministeriö (2026). Kansalaisten tekoälyosaamisen viitekehys. OKM. [CC BY-NC 4.0]
Vallor, S. (2024). The AI Mirror. Oxford University Press.
Veatch, V. (dir.) (2026). Ghost in the Machine. Sundance Film Festival.
Images by Sasa Tkalcan (no GenAI)
Text: concepts sparred with Claude
