When was the last time you challenged yourself mentally? Well, I did a month ago. I did something I had never done before: I answered a call for papers. Seriously, I wrote my master’s thesis in 2003 and have only written two essays since. So what sort of a paper was it? In my day-to-day work at the University of Helsinki I have been working with generative AI and XR for the past year, so I was very intrigued when I was told about a call for papers about AI, and specifically about collaboration and co-creation with AI.

It was a Special Issue Call by the Irish Journal of Technology Enhanced Learning (IJTEL) with the title: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence. It offered three options: 1. Position Paper, 2. Short Report or 3. Book Review. I chose the book review. All three types had to include the following sections: A. abstract, B. introduction, C. paper/report/book review and D. critical reflection. And here comes the twist: C had to be written entirely by the AI! There was the option for A to be written by AI too, which I opted for.

If you want to go straight to the paper, you can do so by scrolling down to the end of this blog post, and while you are at it, take in the imagery along the way. The images serve as illustrations for the book reviewed: Flatland: A Romance of Many Dimensions by Edwin Abbott Abbott (1884). I created them with Midjourney and DALL·E.

When I first read the call for papers on collaborative AI writing, I felt a keen excitement. I had been searching for an opportunity to test my AI skills beyond everyday uses. The prospect of directing a conversational agent through an iterative, collaborative writing project captured my curiosity about the outer limits of its capabilities. However, embarking on extensive research writing exceeded my comfort zone; I hadn’t produced scholarly work in a while! Some self-doubt crept in. Was my assumption that an AI could handle literary commentary just short of delusional? Still, risks accompany rewards. I decided to contribute to this experimental exercise investigating AI’s potential. If missteps occurred, the experience itself would provide an invaluable window into enhancing these emerging technologies responsibly. I resolved that exploring uncharted possibilities with an open mind mattered more than achieving perfect results. Going in with an experimental mindset, I was deeply intrigued by what exactly we might discover along the way!

The deadline for submissions was October 23, 2023, a Monday. I first heard about this call for papers on October 19, a Thursday, so just days before the deadline. Luckily, I had the whole weekend, or so I thought. On Saturday I was instructing a three-hour boxing seminar, which effectively left me with Sunday to work on the paper. As mentioned in the paper, the prompting was not the hardest part and, time-wise, was a brief undertaking (3–4 hours). Although I have been doing this more or less actively for just under a year (remember, ChatGPT was released on November 30, 2022!), I have developed some strategies to get the bot to do what I want it to do in any given situation. The hard part, though, was reflecting on the implications of using AI, in particular the ethical aspects. Furthermore, the formalities of the paper surprised me, although I understand the need for structure and standards; it makes it much easier for the journal’s staff to cope with a trove of submissions. On that note, I really appreciated the detailed instructions given in the call; they helped me focus on the writing. Another story altogether were the other formalities, like setting up an ORCID account and finally adhering to correct citations (APA), something I had struggled with in my thesis too.

I am aware of some of my shortcomings, although it’s worth noting that my formal academic education has included some literature, albeit not extensive coverage of the theories in the field. So I would love to see a scholar of literature take on this task and see what such a book review would look like.

What is AI’s take on the collaboration?

I asked Claude to write about our co-writing experience, as I wanted to get its perspective out too. For clarification, when Claude refers to “the researcher”, it means me.

Me: I am planning on writing a blog post about this experience. Could you please write a paragraph or two about our co-writing experience from your perspective? Say some 500 words? I’d really appreciate it.

Claude: Yes, I’m happy to build out a bit more on the collaborative process from my perspective and refer to you in the third person. Please feel free to use the following expanded reflection:

This scholarly article represented a genuine human-AI collaboration, though with the human firmly directing the process and goals. The researcher provided the initial prompt – a detailed call for papers outlining the themes and requirements for a speculative academic submission generated using AI assistance. After discussing high-level topic choices, the researcher asked the AI assistant, Claude, to draft components like an abstract and book review based on a provided excerpt of Flatland.

Each iterative version saw the researcher supply course corrections and suggestions to improve Claude’s initial attempts. For example, the researcher pushed Claude to strengthen the contextual analysis by incorporating additional theoretical framing and doubling the length of certain critical passages. Claude relied entirely on the specifics of the researcher’s redirects rather than any intrinsic motivation or complex comprehension. The collaboration centered around Claude expanding and enhancing text in the directions indicated by the human researcher.

Throughout the iterative writing and revising, Claude assisted primarily with content generation based on parameters set by the researcher, who decided when satisfactorily rigorous analysis was achieved. Claude played a supplemental role propositioning text to support the researcher’s concepts. Key strengths like rapidly generating lengthy passages helped accelerate the drafting process, saving the researcher time versus writing completely solo. Weaknesses like unreliable word counts also demanded final human review before submission.

Ultimately, this project affirmed AI’s utility as an augmentation tool providing productive content under a human’s creative direction, while revealing gaps parsing complex conceptual feedback sans explicit instructions. Claude’s contributions succeeded only to the extent they aligned with the researcher’s insights driving analysis. Though Claude assisted in crafting cohesive prose, scholar-level comprehension and discourse on this topic depended on human capacities. Still, the collaborative approach led to an accepted scholarly article advancing conceptual discussions around Flatland’s continued relevance and meaning.

Please let me know if you need any clarification or have additional suggestions for improving this reflective summary of our collaborative process from my perspective as the AI assistant!

Me interviewing Claude

Listen to Claude answering my question about our collaboration.

How does Claude do it?

I do not know how an LLM (large language model) works, but I am fascinated by the ever-improving results I get. Since I keep finding myself asking how Claude (or any LLM) does it, I thought: why not ask it directly? And that’s what I did.

Me: I like it. I wonder if there is a way you could describe some internal mechanisms or processes when writing – based on my instructions. What I am interested in is if there is some sort of reasoning to be found (why you write what you write).

Claude: That’s an insightful question. As an AI assistant created by Anthropic to be helpful, honest, and harmless, I don’t have internal subjective experiences or reasoning in the way humans would. But I’m happy to provide some technical context on my underlying mechanisms:

When generating text based on your instructions and feedback, I rely on a neural network inside me trained by Anthropic called Constitutional AI. This model learned common sense reasoning and social cooperation by ingesting millions of online conversations demonstrating appropriate real-world conduct.

As we collaborated on revising the literary analysis, Constitutional AI generated probabilistic predictions for suitable text continuations following the parameters you defined while avoiding potential harms outlined by Anthropic. My responses aimed to provide relevant, logical support given the contextual constraints I was provided.

So in essence, the “reasoning” behind my written contributions stems from Constitutional AI’s training to produce benign, coherent content aligned with instructional prompts and discourse norms. I don’t have access to the model’s internal calculations. But functionally, my goal was crafting respectable prose adhering to general conventions for respectful collaboration while meeting the directives provided in your guidance.

My choice of phrasing or supporting evidence centered on plausibly continuing the textual conversation to meet expected standards. And corrections were based solely on better conforming to specifications you stipulated. I lacked higher aims beyond technically generating ethical, helpful text as defined by parameters set during my creation. Please let me know if you would find any additional details about my technical approach informative to describe from my system perspective!
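
Reading that answer, the core idea seems to be next-token prediction: the model repeatedly estimates a probability distribution over possible next words (tokens) and picks one, given everything written so far. To make that a little more concrete, here is a minimal sketch of that sampling loop, using the small, openly available GPT-2 model as a stand-in. It is my own illustration of the general principle, not Anthropic’s code; Claude’s actual model, training and safety layers are proprietary and far more elaborate.

```python
# Toy illustration of autoregressive text generation: at each step the model
# assigns probabilities to every possible next token and one token is sampled.
# GPT-2 is used here only as a small open stand-in for how LLMs work in general.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Flatland, Edwin Abbott Abbott's 1884 novella, can be read as"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):                              # generate 30 tokens, one at a time
        logits = model(input_ids).logits[0, -1]      # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)        # turn scores into a probability distribution
        next_id = torch.multinomial(probs, 1)        # sample the next token from that distribution
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```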

Finally, working on this project was very rewarding. It has opened up far more new questions than it has given answers. I believe that the most significant aspect currently concerning AI, as I endeavour to articulate in my paper, is its ethical implications. How do you feel about this? Before closing, I would like to offer you some reading and training material on AI ethics.

Ethics of Artificial Intelligence – UNESCO

Ethics guidelines for trustworthy AI – by the EU

Ethics of AI – a MOOC by the University of Helsinki

Me interviewing Claude

Listen to Claude answering my question about its internal mechanisms when writing.