A Week of Conversations That Stuck
Last week offered a rare convergence of perspectives around generative AI, ethics, and education. It began on Tuesday, when I delivered a workshop I have developed over the past two years, Taming the Chatbot – How to become a bot whisperer, to the staff of the Faculty of Theology (University of Helsinki). The tone of the session was reflective and philosophical, with a strong focus on ethics and sustainability. The discussions were thoughtful, shaped by the faculty’s disciplinary grounding, and opened up rich questions about how AI intersects with deeply human values.
On Thursday, I attended the online seminar Tekoälylukutaito osana digitaalista sivistystä – julkishallinnon näkymät (“AI literacy as part of digital civic education – public administration perspectives”, https://okm.fi/tapahtumat/2025-03-27/tekoalylukutaito-osana-digitaalista-sivistysta-julkishallinnon-nakymat), which explored AI literacy in the context of digital civic education in Finland. There, the conversation shifted toward institutions, citizenship, and long-term public responsibility, approached bottom-up through shared experiences rather than top-down. It raised an important question for me: how do we talk about AI literacy in a way that isn’t just about functionality, but about humanity?
These reflections carried into Friday’s workshop, a repeat of Tuesday’s session, this time with a group of university administrators. The dynamic was different—more applied, yet just as meaningful. Exercises, questions, and shared discussions created space for deeper reflection. These conversations point toward a broader, guiding question for this post: How must we think, raise, and educate one another to live well alongside AI?
FYI – I work as an expert in the Educational Technology Services team of the University of Helsinki (UH). From October 2022 to the end of 2024, I was part of the Global Campus project, an internal UH “start-up”, if you will, focused on emerging technologies – testing and applying technologies such as generative AI and XR to online education in a higher education institution (HEI) setting.
kurkista.fi is my hobby these days.
AI Literacy vs. AI Competency
Before diving deeper, it’s helpful to distinguish between two concepts that often get blurred: AI literacy and AI competency. Literacy is about understanding—knowing what AI is, recognising its influence, and engaging with it ethically. Competency, on the other hand, is about doing—using tools effectively, crafting prompts, building workflows. This distinction aligns well with the broader understanding I’m using in this post. Furthermore, it makes sense to look at the definition in Article 3(56) of the EU AI Act (see references): “‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”
But what does “informed deployment” really mean in practical terms? In short, it means using AI consciously, responsibly, and with a clear understanding of what the system does, how it works, and what kind of social, ethical, or personal impact it might have. It’s not just about knowing how to operate the tool—it’s about knowing when, why, and whether to use it at all.
If we borrow the language of Bloom’s taxonomy (Anderson & Krathwohl, 2001), literacy involves remembering, understanding, analysing, and evaluating—skills linked to reflection and moral reasoning. Competency moves into applying and creating—practical, task-oriented skills. Both are important, but they serve different purposes. In this post, when I speak of AI literacy, I mean the kind that enables us to think critically, act ethically, and choose wisely—not just efficiently.
The Mirror of AI: A Reflection on Human Virtue
Philosopher Shannon Vallor, in her book The AI Mirror (2024), offers a powerful metaphor: AI is not a crystal ball showing us the future, nor a neutral tool—it’s a mirror. A mirror that reflects not only our intelligence, but also our values, blind spots, and moral distortions.
What we see in this mirror is not some alien intelligence, but ourselves. And depending on how we build and use these systems, the mirror might be warped like a funhouse lens, exaggerating efficiency while shrinking care, amplifying bias while dulling accountability.
Vallor’s central call isn’t to stop building mirrors, but to become better humans in the act of making and looking into them. In other words: cultivate wisdom, not just intelligence. That demands virtues like humility, justice, and care—virtues that are often excluded from the logics of automation. One of the more urgent implications of Vallor’s metaphor is the degradation of human virtues through over-reliance on AI. If moral qualities like patience, empathy, and responsibility are cultivated through everyday human interactions, what happens when those interactions are increasingly mediated—or replaced—by machine systems?
Vallor warns that as we outsource care, decision-making, and even attention to AI, we risk bypassing the very human experiences that form those virtues. This isn’t just about losing skills—it’s about losing parts of our moral character. AI systems can simulate empathy or guidance, but they don’t reciprocate, challenge, or morally educate in the way human relationships do.
Reading Vallor’s book made me reflect on the role of everyday rituals—like the shared meals of my childhood—in shaping who I’ve become. And I realise I am, perhaps unknowingly, trying to pass on that same moral grounding to my own children when I find myself gently pestering them to join our family dinners and weekend lunches. It’s becoming harder with teenagers, of course, but maybe that persistence is part of the virtue too.
Without intentional reflection, the mirror becomes distorting rather than clarifying. We may end up seeing a version of ourselves that is optimised, efficient—and ethically hollow. And if we stare too long without being challenged, or without the courage to reform ourselves, the mirror may crack—and with it, our image of what it means to be human.
A Case for AI Illiteracy?
It’s worth briefly reflecting on James O’Sullivan’s opinion piece The Case for AI Illiteracy (2025), which provocatively suggests that opting out of AI engagement may be a valid—and even necessary—stance. His argument is that critical distance matters. Not everyone needs, or wants, to engage directly with these tools to retain their moral clarity. It’s a reminder that AI literacy must be about reflection, not compliance.
While O’Sullivan’s essay was published after the week described above, I find his position an intriguing counterpoint to the themes that surfaced during those sessions. At first glance, Vallor and O’Sullivan may seem to diverge: one calls for deeper literacy, the other for strategic disengagement. But in fact, their arguments can be read as complementary.
What Vallor calls “AI literacy” is not just technical knowledge—as established above, it’s a deeply ethical, reflective stance. And what O’Sullivan pushes back against is the reduction of literacy to skills training or prompt engineering. Both thinkers are pushing against the same thing: the loss of our moral agency in the face of automation.
What Do We Do With the Mirror?
This brings me back to the workshop. Taming the Chatbot is not just about mastering prompts—though that too is part of the training. The workshop unfolds in three parts: a theory section, where we explore how generative AI is trained, address ethical considerations like bias, copyright, and misinformation, and discuss environmental sustainability; followed by hands-on prompting techniques; and concluding with reflective exercises. It’s about understanding what happens when we, often implicitly, outsource cognitive and moral labour to machines. It’s about pausing long enough to ask: What does this system reflect about me, my institution, my values?
Some participants may walk away wanting to learn more. Others may choose to engage more critically or even take a few steps back. That’s not failure—that’s wisdom. As Vallor might say, it’s not about whether we look into the mirror. It’s about what we do with the reflection.

Toward an Ethically Literate Future
As folklorist Anna-Leena Siikala (1984) notes in Tarina ja tulkinta, drawing on Linda Dégh’s earlier work, even fairy tales endure not merely for entertainment, but because they fulfil a social need. This insight reminds us that storytelling is not a decorative human activity—it is a functional, meaning-making one. Our stories do more than describe the world; they teach us how to live in it together.
As educational psychologist Deanna Kuhn (1991) has argued, developing critical thinking and reasoning is not just a desirable skill—it’s foundational to democratic participation and lifelong learning. When learners are invited to engage in thoughtful dialogue, question assumptions, and reason through problems, they build habits of mind that underpin both intellectual and moral development. In other words, our ability to navigate complex technologies with discernment begins with how we are taught to think, to argue, and to imagine.
Before anything else, we must reaffirm the importance of how people are raised, educated, and morally formed. Families, schools, and universities are not just knowledge-delivery systems—they are the primary spaces where curiosity is sparked, critical thinking cultivated, and values transmitted across generations. No framework of AI literacy, ethics, or regulation will hold without this deeper foundation of human development. The slow, attentive labour of raising children, mentoring students, or educating citizens is where our collective ethical capacity begins—and where it must be continually renewed.
We need AI literacy—but the kind that includes virtue, not just velocity. We need public education that fosters moral imagination, not just prompt fluency. And maybe we need, from time to time, a few digital hermits to step in from the margins and remind us that not everything needs to be automated.
The mirror is here. What we choose to see in it—and how we choose to respond—is still up to us.
Appendix: The Human–AI Covenant
This closing reflection is inspired in part by John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace — an iconic call to envision the digital realm through a new moral and political lens. While the context has changed, the need for human-centred declarations remains.
From the earliest stories, we have shaped tools—and they, in turn, have shaped us.
In this age, we have summoned systems that think, mimic, and decide.
But we remain the tellers of the tale. The keepers of traditions. The ones who imagine futures and carry values across generations.
Let this not be a spell cast in haste, nor a pact written in the language of code – not alone.
Let this be a covenant: a shared vow to guide our creations with care, and to remain human in their presence.
Covenantal Principles
- We remember that stories shape systems – The narratives we encode become the logic machines follow. We commit to storytelling that dignifies, not degrades.
- We honour the unseen labour of virtue – AI cannot raise children, mourn the dead, or offer compassion. We protect the slow, moral work of being human.
- We centre justice in design and deployment – No system is neutral. We recognise how AI can entrench bias or exclusion and commit to equity in its development and use.
- We account for the ecological cost – We acknowledge the material footprint of digital systems and commit to technologies that serve sustainability, not undermine it.
- We resist the enchantment of efficiency without ethics – When the system optimises, we ask: For whom? At what cost? To what end? Not every gain in speed or scale is a gain in meaning. We challenge the unquestioned pursuit of quarterly growth, automation for its own sake, and systems that treat human judgment as an inefficiency.
- We listen for silence, friction, and dissent – Progress without pause is peril. We build in space for reflection, refusal, and reform.
- We renew our responsibility through community – Literacy is not enough. We need forums, rituals, and shared reflection to uphold this covenant together.
May we not look into the AI mirror only to marvel or recoil—
but to remember who we are, and who we are becoming.
References
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.
European Union. (2024). Artificial Intelligence Act, Article 3(56). Retrieved from https://artificialintelligenceact.eu/article/3/
Kuhn, D. (1991). The skills of argument. Cambridge University Press.
O’Sullivan, J. (2025). The case for AI illiteracy. Retrieved from https://open.substack.com/pub/jamescosullivan/p/the-case-for-ai-illiteracy
Siikala, A.-L. (1984). Tarina ja tulkinta: Tutkimus kansanperinteen muodosta ja merkityksestä. SKS.
Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.
Thank you for reading this far. I’d appreciate it if you could fill in the following form about AI literacy. It’s anonymous.