Claire Bentley

GENERATIVE AI FOR WRITERS: WHERE DO WE GO FROM HERE?

Updated: Oct 25, 2023

Whether we’re pro-AI, anti-AI, or somewhere in the middle: we cannot ignore the robot in the room any longer.


This blog post is my attempt to acknowledge the rapid technological developments in our field, work through my own (mixed) feelings about generative AI, and work out my personal and professional stance on AI going forward (and hopefully help others do the same).


Happy and sad theatre faces drawn as black and white robotic faces
Friend or foe? The answer isn't black and white

To be absolutely clear: I am not a technology expert. I am a busy and overwhelmed writer and editor trying to grapple with new and potentially industry-shaking developments. It should also be noted that AI developments move so quickly that this blog post will be out of date as soon as I post it!


I always try to keep a balanced and open mind in all areas of life. I don’t condone abusive behaviour towards any party involved, no matter where they sit in the debate.


There is a lot of information out there, but I highly recommend Joanna Penn’s podcast as a great starting point (she is pro-AI and has used generative AI in her fiction). Although I disagree with her on some things, I respect her position and appreciate the amount of time she devotes to helping writers navigate the changes. Her episodes are well worth listening to if you want to find out more (regardless of your stance).


As I hope will become clear, it is up to each of us to decide how we will go forward, and that is a very personal decision.


WHAT IS GENERATIVE AI?

First of all, it should be stated that AI has been around (in various forms) for years, and is already used by the big tech companies (e.g. Amazon search, Siri etc). AI, as a whole, is already here and already in use.


To be clear, when I use the term ‘generative AI’ I am referring to ‘deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on’. (https://research.ibm.com/blog/what-is-generative-AI)


Developments in generative AI have exploded in the last few years. Suddenly, as writers, we have access to AI models which can generate new images for book covers and promotional materials (e.g. Midjourney), audio-narrate our books (e.g. through Google Play), and even, potentially, write our stories for us (e.g. Sudowrite).


These options did not exist a few years ago. Industry experts hypothesised about the future impact of AI on the writing field, but now, seemingly out of nowhere, the technologies have arrived. They are forcing us to grapple with our feelings about them, and even to reconsider what it means to be a ‘human’ writer.


For the purposes of this discussion I want to focus on generative AI as used by writers, such as ChatGPT and Sudowrite (which builds on the same GPT models that power ChatGPT, alongside its own in-house narrative tools).


However, many of the points are applicable to other types of generative AI.

I find the easiest way to tackle the subject is to look at some broad considerations, especially the ones which are most migraine-inducing for ‘ordinary’ writers and editors (including me!). This will include an examination of some potential benefits and some potential downsides of the technology.


OUR UNDERSTANDING OF THE WAY GENERATIVE AI WORKS

Again, I’m not technologically minded and I don’t pretend to know exactly how generative AI writing tools work. However, in (very) layman’s terms, they are ‘trained’ using masses of written text and input from humans as to ‘appropriate responses’. Many of the models designed for creative writing (e.g. Sudowrite) use the GPT models behind ChatGPT as their foundation, with additions of their own.


From a user’s point of view, you input a question or prompt (or series of prompts) into the program and the model generates text in response. There are free trials of ChatGPT and Sudowrite available if you wish to ‘play around’ and see how these work.
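For the more technically curious, here is a minimal sketch of what that prompt-and-response flow can look like when a program talks to a model directly rather than through a website. This is purely illustrative and makes assumptions I can’t guarantee for any particular tool: it assumes the OpenAI Python library and an API key, and the model name and prompt are placeholders, not recommendations.

```python
# Illustrative sketch only: assumes the OpenAI Python library (pip install openai)
# and an API key set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# A writer-style prompt; tools like Sudowrite wrap this kind of call
# in a friendlier interface with extra narrative guidance.
prompt = "Describe a rainy harbour town at dawn, in three sentences."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The generated text comes back inside the response object.
print(response.choices[0].message.content)
```

In other words, everything the user-friendly tools do ultimately boils down to sending text in and getting text back; the craft is in what you send and what you do with what comes back.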

A vending machine and a person's finger pushing a button on it
You don't push a button and end up with a beautifully-written novel

In theory, programs such as Sudowrite can generate passages of fiction in response to detailed and carefully considered prompts. However, it is worth noting that it isn’t a case of inputting a couple of prompts and out pops a beautifully written and coherent 90,000-word novel. It is not a vending machine. Anyone who treats it as such (i.e. inputting a few prompts and uploading the resultant literary vomit straight to KDP) is, in my opinion, not a ‘real writer’. Honestly, these ‘writers’ don’t worry me too much, because readers are not stupid (although it remains to be seen how Amazon and others will deal with this problem).

The indie publishing market shows us that, as much as people worry about the millions of books on the market, most ‘poor quality’ ones do not do well and quickly sink to the bottom of the pile (unfortunately, so do many ‘high quality’ ones). It is difficult enough to stand out in the fiction industry if you have a good quality book, so I don’t give much thought to those putting out poor quality and expecting to make millions. If that’s what they want to do, they’re in the wrong industry!


In my understanding, those writers who wish to use generative AI as part of their creative process still need to invest many hours of work and thought into doing so. Crafting prompts that get the writer anywhere near where they want to be is a skill in itself, and these writers are not generating an entire novel over a weekend. There are many rounds of experimentation and iteration, with significant written and structural input from the writer themselves, and with the usual rounds of editing and refining the story and the text.


AIDING THE WRITING PROCESS?

There’s no denying that the writing life is tough! Those of us who write fiction know how many hours go into conceptualising, crafting and honing our stories. No matter how much experience an individual writer has, each project is different, and each one creates challenges, plot holes, complexities, setbacks, and a myriad of other hurdles.


Generative AI for writers is marketed as an aid: a writing companion to help us with the difficult process of creative writing, especially if there is an aspect of writing we struggle with, or when we feel stuck. Some writers don’t feel comfortable writing alongside AI but happily use it as a marketing aid. It is not all or nothing, and writers can pick and choose which elements of writing and their wider business they use generative AI to help with.



Different coloured Duplo bricks numbered 0-9 in a row
Could generative AI help writers with their process?


Is writing with a generative AI actually 'writing'? This is one of those murky grey areas which I don’t have a definitive answer for (and I don’t believe anyone else does either).

Proponents of AI argue that using ChatGPT is no different to using a thesaurus, or writing prompts, or Angela Ackerman and Becca Puglisi’s brilliant series of thesauri for writers (if you haven’t discovered these yet, you absolutely must! They’re wonderful). Writers and editors use spelling and grammar software (e.g. ProWritingAid) without having an existential crisis over their writing or what it is they do. Interestingly, ProWritingAid has announced its intention to incorporate aspects of generative AI in its suggestions, and Microsoft Word will also incorporate generative AI in the near future. So good luck to us moving forward if we reject all forms of generative AI in our process!


So where do we draw the line? Is it even possible to draw a line? Is a writer still a writer if they are AI-assisted? Or if they use writing prompts and thesauri? Or if they use a spelling and grammar checker? Honestly, I don’t know.


HELPING MARGINALISED COMMUNITIES?

One potential benefit of generative AI is in helping marginalised writers to write their fiction projects when the ‘usual’ methods of fiction writing might be difficult or inaccessible to them. For example, a writer whose physical or psychological health issues normally prevent them from writing reams of text may suddenly find themselves able to generate large amounts of text without damaging their health. Regardless of how we feel about generative AI, there is a lot to be said for this benefit. It has the potential to help disabled and neurodiverse communities participate more fully in the fiction world.


However, we have to ask whether generative AI would reduce, or increase, existing inequalities in the fiction world. Generative AI is becoming available to those who are traditionally more ‘favoured’ by the industry anyway (i.e. white, male, wealthy, able-bodied etc). Will the technology help those who have restrictions, or who have less time on their hands, to catch up with the advantaged and prolific writers, bearing in mind that those who are already advantaged will have more time to learn how to use and prompt the models, not to mention more money to pay for the subscriptions? I’m doubtful, and I personally believe that, overall, AI will further increase industry and societal inequality.


For decades, every new technological revolution has been badged as making our lives easier, when in reality differences in digital capability and access help worsen pre-existing societal inequality and make it even easier for those ‘lower down’ the pecking order to be downtrodden and exploited for profit.


To be honest, this is one of the concerns that worries me most about developments in AI. Does it have the potential to help solve some of the biggest crises facing humanity, such as climate change and antibiotic resistance? Yes. However, unless something drastically changes, generative AI developments will be yet another tool enabling a tiny minority to fleece the rest of us for profit and fame.


Another concern (and one which is already being seen in AI image generation) is that inherent societal biases will potentially be replicated in the output of these models. For example, if most of the instances a model is trained on show white males in positions of power, then it is statistically more likely to replicate this societal bias in its output. It may be possible to get around this using the ‘right’ prompts, but then would plagiarism be more likely if the model had fewer examples of non-white, non-male people in positions of power to draw from?


I’ll end this rant here otherwise this post will be a novel in itself! Suffice it to say: the thought of another group of ‘techbros’ making billions while most of the rest of us are exploited and struggle to make ends meet is not a prospect that fills me with sunshine and happy feelings.


HAVE COPYRIGHTED MATERIALS BEEN USED IN THE TRAINING?

This seems like a good time to address what is, in my opinion, one of the biggest ethical concerns with generative AI models. There is little transparency regarding how these models make decisions, how they have been trained, and whether copyrighted materials have been used to train them. The developers are keeping quiet about this, but it is likely that writers’ copyrighted materials have been used to train the models. For example, Sudowrite’s own FAQs state that the program can be made to plagiarise verbatim by inputting text it has seen before, e.g. Harry Potter. The creators of Sudowrite discourage plagiarism, but how would this be possible if the Harry Potter series had not been ‘scraped’?


Duplo firefighters: one threatening the other with an axe
Are writers being exploited?

Not that I’m concerned about the income of She Who Must Not Be Named, but what about ‘smaller’ writers who struggle to make an income from their writing and have not made a penny from this process? Is it ethical to take their copyrighted writing, which they worked hard to create, and use it to inform the development of generative AI tools which are then used to make someone else a profit? Personally I don’t think it is, especially if these models turn the industry upside-down and make it even more difficult for these writers to make a living from their craft. The irony is eye-watering.


Fan fiction writers aren’t allowed to charge for their work because it’s based on the intellectual property of other writers. If fan fiction writers aren’t allowed to profit from their work because of copyright infringement, then why are companies who produce generative AIs allowed to profit from others’ IP?


This practice does not sit well with me at all, and this is one of my biggest objections to the technology. For me, this is a bigger turn-off than the whole ‘are people who use generative AIs real writers’ question. However, when it comes to regulating how these models are trained, it looks as though the digital horse has bolted (which is nothing new in the fast-paced technological world).


If the text output by a generative AI is substantially different from the source material used to train it, then technically it isn’t in breach of copyright. Technically, what the models are doing is not illegal (although, again, it is morally dubious). Big companies (e.g. Microsoft) are continuing to invest in these technologies and include them within their own software packages despite these ethical questions. So will anything be done to address this?


DO HUMANS WORK IN THE SAME WAY?

Proponents of generative AI argue that humans work in the same way as these models. We, as writers and editors, are all consciously and subconsciously influenced by writing and artwork which came before us. We read, we watch TV, and we consume other people’s creative work. Whether we intend it or not, the writing and art we encounter in our lives influences the way we generate our own writing.


Yes, this is true. However, it took me a while to pinpoint exactly what was bothering me about this argument. Then it came to me. First of all, the creators who inspired our own work are (usually) compensated in some way for that. We buy books. We borrow books from the library. We pay gallery and museum fees to view artwork. We pay for TV subscriptions. We tell our friends about writing and other forms of art we enjoyed, who in turn discover it and increase exposure and compensation for the original creator. We take time to savour and enjoy the artwork we consume.


Compare this to data ‘scraped’ from the internet to train a generative AI model. The original work is harvested and not savoured or enjoyed. The original work’s reach is not increased: instead it is hidden in a database alongside many other pieces of creative work, where its influence is invisible and unacknowledged. The original creator is not compensated.


This is the difference. And, for me personally, this difference matters.


If, in the future, a generative AI model were created which licensed the original works and fairly compensated the original creators, then I would consider using it. It would be a way to increase exposure and compensation for the original creators. It could even help address marginalisation in the fiction world, by paying marginalised writers to include their work and so reduce the inherent biases in the output. More of the wealth would be shared with the humans who make this industry succeed in the first place.


WHO OWNS THE COPYRIGHT OF GENERATED MATERIALS?

As things stand, AI-generated work cannot be copyrighted unless there has been substantial human input in its development. What is the definition of substantial human input? This is unclear, and it may be that different countries decide to go with different definitions.


If you wish to write alongside AI, you will need to document your process so that you can show your (human) input.


Although there are issues here too, personally I feel this is less of a concern than copyright questions around what went into the model in the first place.


To all creators: if you use generative AI to help produce your creative work, please please please acknowledge it as co-written with AI. We need transparency, so that writers and readers know how the work was created and can make informed decisions about their consumption. If the work is of high quality then it may not matter to readers whether or not AI was a 'creative' collaborator on the project.


THE COST

I’ll use Sudowrite as an example, as cost is a consideration and also relates to the issue I discussed earlier regarding the impact of generative AI on writers’ livelihoods (especially marginalised writers).


You can access a free trial of Sudowrite, and you can also sign up to use ChatGPT with GPT-3.5 for free (GPT-4 requires a paid subscription).


However, if we look at the paid subscriptions for Sudowrite, the most expensive option is 100 USD per month. I can’t speak for the rest of the world, but here in the UK we are embroiled in a cost-of-living crisis which is causing financial problems for the vast majority of us. Mortgage rates, energy bills, food prices, petrol prices… the cost of everything has skyrocketed and most of us are tightening our belts. I’ll resist getting into a long political rant about the reasons behind this…


Suffice it to say that, with the current position of my writing and editing business, and with two young children to raise, this is a cost that I currently cannot afford. Arguably the 25 USD option is more affordable, but how quickly would you burn through 90,000 words in a month when co-writing with AI (with the trial and error that process would involve)? However, I do like that users have the option to easily pause or cancel subscriptions.


For most writers this isn’t a high-paying profession, so once again we’re faced with a situation where only those who are already wealthy and/or already successful will realistically be able to use up-to-date generative AI models to assist with their writing. If the supposed efficiency gains turn out to be real, then only those who can afford the tools will be able to benefit.


ARE WE FOCUSING ON THE RIGHT THING?

I don’t know about you, but when I imagined the future of AI development I imagined AI being used to help free humans from the burden of beneficial but repetitive, soul-draining tasks. I imagined it being used for the types of large-scale complex data analyses which are impossible for the human mind to carry out (and to some extent it is being used in this way). If AI became sentient in the future then we could revisit this, but on the whole I thought the general idea was to make life easier and more enjoyable for humans!


What I did not imagine was that AI would be used to carry out creative tasks like consuming and generating artwork and writing: in other words, the types of tasks humans actually enjoy and would love to have more free time to do. But no. Greed and capitalism once again win over individual humans’ quality of life.


I suppose I should have known. When the pace of technological development began ramping up last century people hypothesised that it would be beneficial for humankind. Technology would increase our efficiency, and thus free up more of our time for personal enjoyment and fulfilment.


Of course, that isn’t what happened. Human greed won out. Technology and the efficiency gains it creates are instead used to further exploit frontline workers in all sectors. Working hours creep outside their contracted scope. Employees are contactable at any time of day or night thanks to email. And yet many companies still force employees to commute to a physical workplace each day (can’t have technology making it easier to work from home and enjoy a better quality of life, oh no).


Exhausted cartoon car with flat tyres, scratches, and dark smoke billowing out of the back
Why can't AI take over the work we don't want to do?

You may sense that I’m angry about the direction AI has taken! I say this as a mother of young children who is trying to run a household and build a writing and editing business in whatever snippets of time I can carve out between everything else. I would love an AI to help with cleaning and maintaining the house, and food shopping, and cooking etc. This would free up so much of my time, and so much of other people’s time (read: women and other marginalised groups), to build businesses, to learn, to create, and to enjoy time with their families.


But alas. Writing is the ‘easy win’ for technology professionals who likely have other people (read: women and other marginalised people) taking care of all that stuff for them, so they can exploit struggling creatives instead of actually helping to ‘free’ large sections of society. As a woman, I’m used to female roles and ‘female work’ being undervalued and underappreciated (and definitely underpaid). I suppose this is a big part of why this makes me angry, and I’m struggling to keep my own emotions and experiences out of this part of the discussion (oops!).


SO HOW DO WE MOVE FORWARD? WHERE DO WE GO FROM HERE?

Many publishers, literary magazines and writing competitions are already stating they will not accept writing produced or assisted by generative AI.


Will they change their minds in the future? Will we see traditional publishing as the route for ‘fully human’ writers, and indie as the route for AI-assisted writers? Will the traditional publishing world also start using AI-assisted writers and freelancers? Most are resisting for now, but how much longer will that continue?


I’m not here to say what will happen. None of us know exactly how this technology will work out, or whether (and how) it will reshape the creative writing landscape.


MY OWN STANCE?

This is by no means an exhaustive list of all the concerns and considerations around generative AI for writing, and if you want more up-to-date information on the legal and technical aspects of AI then I suggest you look for other sources. However, these are the issues which stand out to me, as a non-technical, non-wealthy, ‘small’ creator who is facing a massive shake-up of the industry where I make my name and my living.


As you can probably tell, my feelings about using generative AI for fiction are extremely mixed. I’ve tried my best to be balanced and impartial, but for some of the issues that is impossible for me. I maintain that I don’t condone abuse towards individuals, but that doesn’t mean I don’t have some strong opinions about the technology and how it is being developed and used (at least as it currently stands).


Regardless of my feelings, I believe the digital cat is now out of the bag. Hopefully governments and relevant departments and professions are waking up to these developments, and hopefully they will try to rein in and regulate this rapidly-changing field (e.g. revisiting copyright laws, requiring disclosure when AI is used etc). Also, we, as writers and other related professionals, need to stop hiding from this. We need to try and get to grips with it, to understand it as best we can, to help inform legislation and guidelines for its use, and to decide for ourselves whether (and how) we will include generative AI within our own business models. Government departments (and writers) cannot afford to leave it up to the ‘techbros’ to decide how generative AIs are regulated and used.


Personally? I do NOT intend to use generative AI for brainstorming or writing my fiction, or to generate blog posts. I use spelling and grammar checks in my everyday writing and editing as a backup for my brain, and I will continue to do so. For me, this is where I’m comfortable drawing the line with regards to how much I include generative AI within my own business. I can’t fully explain why, but having help in conforming to the ‘rules’ of a language feels very different to having help with the overall story, or with the writing or development of that story.


From an editing perspective, I’ve always enjoyed developmental and copyediting more than proofreading, and these are areas where (at least currently) I feel a human brain is superior to an AI one (e.g. character development, story progression, cohesion etc). It may be that I focus my business on these areas of fiction editing: I enjoy them more anyway, and I know I can do a better job at these than a (current) AI model can.


Basically, I will make my humanity part of my brand moving forward. I have already added a disclaimer to my blog stating what I do (and don’t) use generative AI for, making it clear that I don’t consent to my intellectual property being used to train generative AI models, and that I can be contacted to discuss licensing of my work for this purpose. I’m not a lawyer, but I feel better for stating my position and intentions, and I know how I will approach these changes going forward.


I’m not against generative AIs per se, and I reserve the right to change my mind and adjust my business model in the future. For example, if an ethical model were developed which fairly compensated creators and tried to address societal biases in its training, then I might reconsider my stance. However, for now, the ethical issues surrounding this technology mean that I will be emphasising my humanity as part of my writing and editing brand.


CONCLUSION

It is important to understand that these developments are very new and potentially disruptive, and the picture isn’t as black and white, as all or nothing, as some would like to believe. There are lots of grey areas and unanswered questions.


I haven’t even had time to get into the use of AI in other aspects of our fiction businesses and the decisions we must make here too (book covers, audio etc), but I urge you to consider these aspects. Yes, cost is a consideration, but so too are speed and quality of output, and especially the potential impact on human creators in these fields.


One thing is becoming clear: we, as writers and editors, can no longer hide from this. We need to find out as much as we can, and make active and informed decisions about whether, how and to what extent we may (or may not) include generative AI in our future business.


Each of our writing careers is a business, whether we think of it that way or not. Like any other business, we need to examine the coming changes and make active, informed decisions about how we will deal with those changes. While AI and the rules and regulations around its use may shift and evolve, the one thing we cannot do is bury our heads in the sand and hope this will go away.


Generative AI is here. It is up to each of us to decide how we will move forward, in the way that works best for us personally.


BEFORE YOU GO…

Do you have any thoughts on generative AI and/or the issues discussed above? Please join in the discussion (contact details below).


Please feel free to comment on the article and/or contact me if you have any questions:


Socials: @cbentleywriter on most of them!




Buy me a coffee: https://ko-fi.com/clairebentley

I welcome respectful and friendly discussion on the topics I write about, including if your opinion differs from my own.


Disclaimer: generative AI

I do not use generative AI to produce or inform my blog, my images, or my fiction. All of my content is generated by the chaotic firing of my own (human) brain! (I have access to some images through my Wix subscription).

I do not consent to the use of my content, images, or fiction to train generative AI models. Please contact me to discuss permission and compensation if you wish to use my content in this way.


