About AI-Assisted Articles
25 February 2025
Mark Gibson, UK
Health Communication Specialist
A few weeks ago, I wrote a piece that criticised AI-generated content. In particular, I was referring to articles that were generated and presented as human-written, with the poster passing off the artificially created content as their own expertise. It was partly tongue-in-cheek, but partly a true sentiment: I believe that blindly forwarded articles and AI-generated text are a source of immense digital pollution.
My suspicion was confirmed when I fed some of the fake articles I had in mind into AI detector apps. Lo and behold, I was correct: the apps flagged all five articles as 100% AI-generated. I felt vindicated.
Then, I started experimenting with a couple of my recent articles on Patient Engagement. I was stunned to see AI generation detected in some of them. How was this possible when I knew I had written them? Amongst the text identified as AI-generated were turns of phrase like ‘prism of privilege’ and ‘service refugee companies’ that were definitely mine. I tried other apps and the same article received wildly different results:
Quillbot: 100% human;
Grammarly: 50% AI-generated;
Justdone: 13% AI-generated;
Phrasly: 77% AI-generated.
Regardless of this inconsistency, any detection of AI content in my articles was alarming to me. I did not purposely generate AI text. So, what was going on?
It must have been due to the kind of text-related assistance I was asking ChatGPT to provide. For example:
Organising notes: On any given topic I want to write about, I have notes all over the place: half handwritten (around 50 full notebooks since 2018…), half electronic, and even then the notes are not ordered. It is like a very boring jigsaw. I have been using ChatGPT to impose a thematic organisation on these notes – putting the pieces of the jigsaw together. I then use this new presentation of the themes as scaffolding to write. These are my notes from references I have read, with my own two eyes, and written by hand or typed up by me, but then organised by ChatGPT. It is a wonderful tool for that.
Division into smaller articles: Once I start on a theme, say the Patient Voice, I write it as one long essay (think London Review of Books) and then cut it down into smaller articles. ChatGPT is great for identifying where one smaller article could end and the following one could begin. Occasionally, it imposes its own short introduction and conclusion to separate each sub-article.
Cutting repetition: I am well known for repeating myself, whether in spoken or written discourse. I make one point in a sentence and then follow up with another sentence stating the exact same point in different words. To stop myself from doing this – because I realise it is an irritating trait – I ask ChatGPT to cut down the word count, sometimes by up to 200–300 words, and identify repetitions, or to summarise key points into a table or bullet points.
Suggesting titles and opening lines: My titles are always quite boring. I write what-it-says-on-the-tin kinds of titles, such as ‘About AI Assistance’. I have often asked ChatGPT for title suggestions by feeding in the entire text of an article and asking it to come up with something better than the unimaginative one I thought up. I also sometimes feed in the notes for the article and ask it to write the first sentence, to kick-start my thinking: the opening line is often the frustrating place where a writer procrastinates.
All of this is AI assistance, rather than AI generation. However, I realise that, through tasks like reorganising notes, cutting down text and suggesting wording for titles, introductions and so on, the writing goes through ‘AI treatment’: a large language model like ChatGPT makes other subtle changes. It imposes its style on the wording, bringing in words like ‘leverage’ and ‘fostering’ that were never in the original. It changes the sentence structure, building longer sentences out of multiple clauses. It imposes punctuation conventions, like the em-dash, that were not in the text I gave it. If you are asking ChatGPT to lose a few hundred words from your article or to summarise text into a table, then your work essentially becomes co-authored. The same applies to asking the tool to break up a large article into smaller ones.
Therefore, I posted co-authored (human–AI) articles rather than sole-authored (human) ones. Lesson learnt. I will not be doing any of that again.
I also think that the AI detector tools vary in how much you can trust them. I put an article I wrote in 2019 through one of them: it decided the article was 30% AI-generated. Then, I gave it the first page of a book chapter I wrote in 2002, years before any of these tools existed: this came back 35% AI-generated. Perhaps it is my writing style: the AI detector may have found it hard to distinguish my efforts from the tedious AI-generated output that is the reading equivalent of listening to the buzzing of a fridge.
There’s a twist to this story, though, that is worth telling. We have been developing a multilingual corpus of glossaries on health promotion topics. We have also developed ways to generate package leaflets in lesser-known languages. It is all experimental, but we have already generated millions of words as part of this corpus. All of it is AI-generated with no input from any human, at least in the current drafts. I pasted some of this text into a number of AI detectors: all of them concluded that it was 100% human. None of it was human, not even 0.5% of it.
From there, I started experimenting more: I created a Spanish–German–French–English glossary of the first few chapters of the Book of Genesis, presented as a four-column, language-by-language comparison table. I put this into an AI detector and it said the text was only 30% human. Perhaps here the app was onto something. Maybe it spots the divine authorship.
The question is: if AI assistance implies that there is AI co-authorship in an article, what are the ethics of this? We do not tend to state, for example, that this month’s sales projection was authored by Janet from Accounts alongside a formula provided by Excel. Similarly, should we be transparent and explicit about AI assistance in writing, or should it be taken for granted, with the co-authorship going uncredited?
Thank you for reading,
Mark Gibson
Leeds, United Kingdom, February 2025
Originally written in English