
GENERATIVE AI FOR WRITERS: WHERE DO WE GO FROM HERE?

  • Writer: Claire Bentley
  • Jul 4, 2023
  • 20 min read

Updated: Aug 13, 2025

Whether we're pro-AI, anti-AI, or somewhere in the middle: we cannot ignore the bot in the room any longer.


I always try to keep a balanced and open mind in all areas of life. This blog post is my attempt to acknowledge the rapid technological developments in the writing and editing field, work through my own (mixed) feelings about generative AI, and solidify my professional stance on AI going forward (and hopefully help others do the same).


Happy and sad theatre faces drawn as black and white robotic faces
Friend or foe? The answer isn't black and white

To be absolutely clear: I am not a technology expert. I am a busy and overwhelmed writer and editor trying to grapple with new and potentially industry-shaking developments. It should also be noted that AI developments move so quickly that this blog post will be out of date as soon as I post it!


There is a lot of information out there, but I highly recommend Joanna Penn’s podcast as a great starting point. To be clear: she is pro-AI and has used generative AI in her fiction and marketing. Although I disagree with her on some things, she has spent years helping other authors navigate the writing and publishing process, and I respect her work and her opinions. Her episodes are well worth listening to if you want to find out more – regardless of your stance.


To be clear: I do not condone abusive behaviour towards people based on their AI stance. As I hope will become clear, it is up to each of us to decide how we will go forward, and that is a very personal decision.


WHAT IS GENERATIVE AI?

First of all, it should be stated that AI has been around (in various forms) for years, and is already used by tech companies (e.g. Amazon search, Siri etc). AI – as a whole – is already here and already in use.


To be clear, when I use the term ‘generative AI’ or 'gen-AI', I am referring to ‘deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on’. (https://research.ibm.com/blog/what-is-generative-AI)


Developments in generative AI have exploded in the last few years. Suddenly, we have access to AI models which can generate new images for book covers and promotional materials (e.g. Midjourney), audio-narrate our books (e.g. through Google Play), and even – potentially – write our stories for us (e.g. Sudowrite).


These options did not exist a few years ago. Instead of gradual changes, these technologies have crashed into multiple industries – including the book industry – and turned everything upside down before we could even catch our collective breath. Regardless of how we feel about gen-AI, this technology is forcing us to grapple with our feelings about it, and even to reconsider what it means to be a ‘human’ writer.


Gen-AI is being used in many fields, and it does have the potential to carry out beneficial tasks that no human researcher would have the time or resources to do (e.g. speeding up the discovery of new drugs). However, for the purposes of this discussion, I am focusing on generative AI within the fiction-writing industry.


This blog post looks at some broad considerations for use of gen-AI in the writing, editing and book marketing fields. I first published this post in July 2023, and I updated it in August 2025 to take into account my most recent research and feelings on the subject.


HOW GENERATIVE AI WORKS

Again, I’m not technology-minded, and I don’t pretend to know exactly how generative AIs work. However, in (very) layman’s terms, they are ‘trained’ using masses of data (e.g. written text, images etc) alongside input from humans as to ‘appropriate responses’. Many of the models designed for creative writing (e.g. Sudowrite) use GPT models – the same technology behind ChatGPT – as their foundation, with additions of their own.


From a user’s point of view, you input a question or prompt – or series of prompts – into the program, and the model generates an output in response. There are free versions and free trials of some of these models (e.g. ChatGPT) if you wish to ‘play around’ and see how they work.
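For the more technically curious, here is a very rough sketch of what that prompt-and-response loop can look like under the hood. This is purely illustrative – it assumes the OpenAI Python library and an API key, and the prompt is a made-up example – so please treat it as a sketch rather than a picture of how any particular writing tool is actually built.

    # Purely illustrative sketch of a prompt -> response loop.
    # Assumes the OpenAI Python client (pip install openai) and an API key
    # stored in the OPENAI_API_KEY environment variable. The prompt below
    # is a made-up example, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Suggest three possible opening lines for a cosy mystery "
        "set in a seaside bookshop."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model would do
        messages=[{"role": "user", "content": prompt}],
    )

    # The generated text comes back inside the response object.
    print(response.choices[0].message.content)

In other words: text goes in, text comes out, and everything else – including the judgement about whether that output is any good – stays with the human at the keyboard.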

A vending machine and a person's finger pushing a button on it
You don't push a button and end up with a beautifully written novel

In theory, programs such as Sudowrite can generate passages of fiction in response to detailed and carefully considered prompts. However, it is worth noting that it isn’t a case of inputting a couple of prompts into the 'plagiarism program', and out pops a beautifully written and coherent 90,000-word novel. It is not a vending machine. Anyone who treats it as such – inputting a few prompts and uploading the resultant literary vomit straight to KDP – is, in my opinion, not a ‘real writer’. Honestly, these ‘writers’ don’t worry me too much, because readers are – on the whole – intelligent and discerning, and are seeking human-written stories (although it remains to be seen how Amazon and others will deal with this problem).

The indie publishing market shows us that, as much as people worry about the millions of books on the market, most ‘poor quality’ ones do not do well and quickly sink to the bottom of the pile. Unfortunately, so do many ‘high quality’ ones. It is difficult enough to stand out in the fiction industry if you have a good quality book, so I don’t give much thought to those putting out poor-quality work and expecting to make millions.


In my understanding, those writers who wish to use generative AI as part of their creative process still need to invest many hours of work and thought into doing so. Coming up with and generating prompts which get the writer anywhere near where they want to be is a skill in itself, and – from my understanding – these writers are not generating an entire novel over a weekend. There are many rounds of experimentation and iteration, with significant written and structural input from the writer themselves, and with the usual rounds of editing and refining the story and the text.


I sometimes wonder whether the industry will fragment into 'human-written' versus 'AI-assisted' books. For example, photography disrupted the art scene, and a camera can 'capture' an image much faster than a human hand, but photography did not eradicate drawing, painting, sculpting etc. It became its own category. And professional photographers still edit their photos!


AIDING THE WRITING PROCESS?

There’s no denying that the writing life is tough! Those of us who write fiction know how many hours go into conceptualising, crafting and honing our stories. No matter how much experience an individual writer has, each project is different, and each one creates challenges, plot holes, complexities, setbacks, and a myriad of other hurdles.


Generative AI for writers is marketed as an aid: a writing companion to help us with the difficult process of creative writing, especially if there is an aspect of writing we struggle with, or when we feel stuck. Some writers do not wish to write with gen-AI but will happily use it as a marketing aid. It is not all-or-nothing, and writers can pick and choose which elements of writing – and of their wider business – they use (or don't use) generative AI for.



Different coloured Duplo bricks numbered 0-9 in a row
Could generative AI help writers with their process?


Is writing with a generative AI actually 'writing'? This is one of those murky grey areas which I don’t have a definitive answer for – and I don’t believe anyone else does either.


Proponents of AI argue that using ChatGPT is no different to using a thesaurus, or using writing prompts. For example, Angela Ackerman and Becca Puglisi have a brilliant series of thesauri for writers. Writers and editors use spelling and grammar software (e.g. ProWriting Aid). Most writers are not having an existential crisis about whether using the Emotion Thesaurus means they're no longer a 'real writer'. Interestingly, ProWriting Aid now incorporates aspects of generative AI in their suggestions, and apparently can now even provide developmental feedback on stories.


So, where do we draw the line? Is it even possible to draw a line? Is a writer still a writer if they are AI-assisted? Or if they use writing thesauri? Or if they use a spelling and grammar checker? Honestly, I don’t know.


As an editor, I experimented with ProWriting Aid with my own writing to see what it could do. It's certainly useful in helping authors pick up on issues within their text, and would be a good option for an author who could not afford professional editing. However, as things currently stand, it cannot replace a human editor. As an aside, one of my clients pitted my sample edit against ProWriting Aid when deciding whether or not to book their copyedit with me. They confirmed their booking with me less than an hour later!


HELPING MARGINALISED COMMUNITIES?

One purported benefit of generative AI is in aiding marginalised writers in writing their fiction projects, when the ‘usual’ methods of fiction writing might be difficult or inaccessible to them. For example, a writer with physical or psychological health issues that normally prevent them from writing reams of text may suddenly find themselves able to generate large amounts of words without damaging their health. Regardless of how we feel about gen-AI, there is a lot to be said for this benefit if it proves accurate. It could help disabled communities participate more fully in the fiction world.


I can't speak for all disabled communities, but – speaking as an autistic person – the thought of handing control of my writing to an outside entity – whether AI or human – is horrifying to me! Yes, my autism causes problems for me in managing my life and in writing consistently, but that doesn't necessarily mean that I want to outsource parts of my work to gen-AI.


Although I do not currently use gen-AI in any part of my writing and editing business, the one area which tempts me is audio production. Companies such as ElevenLabs now allow authors to produce AI-narrated versions of their books, and even to clone their own voice for their personal use and / or to license it to others.


I am absolutely torn on this; I want my future books and my written material to be available in audio format for greater accessibility, but I don't know whether that benefit outweighs the 'costs' of using gen-AI. I don't have the money to pay for a human narrator, and my autism means I don't wish to perform it myself because I often struggle with speech – especially if a camera or microphone is placed in front of me. In my case, it isn't a choice between a human narrator and an AI narrator – if it were, I would choose the human every time. The choice is between an AI narrator or no audiobook at all – unless I eventually make enough money to afford a human narrator.


It's a dilemma, and it's one I haven't solved yet.


IT'S STILL CAPITALISM...

As Chuck Wendig points out, we're still entrenched within a capitalist society. Is generative AI as inevitable as the tech gurus would like us to think, or is it yet another way for them to make even larger profits by using AI as an excuse to replace or outsource 'expensive' human creativity?


Importantly, we have to ask whether generative AI would reduce – or increase – existing inequalities in the fiction world. Gen-AI is becoming available to those who are traditionally more ‘favoured’ by the industry anyway – white, male, wealthy, able-bodied etc. Will the technology help those who have restrictions, or who have less time on their hands, to catch up with the advantaged and prolific writers? Bear in mind that those who are already advantaged will have more time to learn how to use and prompt the models, not to mention more money to pay for the subscriptions. I’m doubtful, and I personally believe that, overall, AI will further increase industry and societal inequality.


For decades, every new technological revolution has been badged as making our lives easier, when in reality differences in digital capability and access help worsen pre-existing societal inequalities and make it even easier for those ‘lower down’ the pecking order to be downtrodden and exploited for profit.


To be honest, this is one of the concerns that worries me most about developments in gen-AI. Does it have the potential to help solve some of the biggest crises facing humanity, such as climate change and antibiotic resistance? Yes. However, unless something drastically changes, generative AI developments will be yet another tool enabling a tiny minority to fleece the rest of us for profit.


For most writers this isn’t a high-paying profession, so once again we’re faced with a situation in which only those who are already wealthy and / or already successful will realistically be able to use up-to-date generative AI models to assist with their writing. If the supposed gains in terms of efficiency turn out to be true, then only those who can afford to will be able to benefit.


Another concern – and one which is already being seen in AI image generation – is that inherent societal biases will potentially be replicated in the output of these models. For example, if most of the instances a model is trained on show white males in positions of power, then it is statistically more likely to replicate this societal bias in its output. It may be possible to get around this by using the ‘right’ prompts, but then would plagiarism be more likely if the model had fewer examples of non-white, non-male people in positions of power to draw from?


I’ll end this rant here, otherwise this post will be a novel in itself! Suffice it to say: the thought of another group of ‘techbros’ making billions from the hard work of human artists – while most of us are exploited and struggle to make ends meet – is not a prospect that fills me with sunshine and happy feelings. Would governments introduce Universal Basic Income to offset gen-AI replacing many roles? Possibly, but – judging by recent history – my hopes aren't high.


HAVE COPYRIGHTED MATERIALS BEEN USED IN THE TRAINING?

This seems like a good time to address what is – in my opinion – one of the biggest ethical concerns with generative AI models. There is little transparency regarding how these models make decisions, how they have been trained, and whether copyrighted materials have been used to train them. It is almost certain that writers’ copyrighted materials have been used to train the models. For example, in March 2025, The Atlantic revealed that Meta used millions of pirated books and research articles to train their Llama gen-AI model. Many academic articles I worked on in my previous life as a healthcare researcher are in the database. This was done without consent or compensation to the original creators.


Sudowrite's own FAQs state that the program can be made to plagiarise by inputting text it has seen before verbatim, e.g. Harry Potter. The creators of Sudowrite discourage plagiarism, but how would this be possible if the Harry Potter series had not been ‘scraped'?


Duplo firefighters: one threatening the other with an axe
Are writers being exploited?

Not that I’m concerned about the income of She Who Must Not Be Named, but what about ‘smaller’ writers who struggle to make an income from their writing and have not made a penny from this process? Is it ethical to take their copyrighted writing – which they worked hard to create and in many cases spent a lot of money to publish – and use it to inform the development of generative AI tools which are then used to make someone else a profit? Personally I don’t think it is, especially if these models turn the industry upside-down and make it even more difficult for these writers to make a living from their craft. The irony is eye-watering.


Fan fiction writers aren’t allowed to charge for their work because it’s based on the intellectual property of other writers. If fan fiction writers aren’t allowed to profit because of copyright infringement – despite the fact that their work is arguably 'transformative' – then why are companies who produce generative AIs allowed to profit from others’ IP?


This practice does not sit well with me at all, and this is one of my biggest objections to the technology. For me, this is a bigger turn-off than the whole ‘are people who use generative AIs real writers’ question. However, when it comes to regulating how these models are trained, it looks as though the digital horse has bolted.


If the text output by a generative AI is substantially different from the source material used to train it, then it has arguably been 'transformed' and it isn’t in breach of copyright. Technically, what the models are doing is not illegal – although it is morally dubious. Big tech companies continue to invest in these technologies and include them within their own software packages despite these ethical questions. So will anything be done to address this?


DO HUMANS WORK IN THE SAME WAY?

Proponents of generative AI argue that humans work in the same way as gen-AI models. They argue that writers are consciously and subconsciously influenced by writing and artwork which came before us. We read, we watch TV, and we consume other people’s creative work. Whether we intend it or not, the writing and art we encounter in our lives influences the way we 'generate' our own writing.


Yes, this is true. However, it took me a while to pinpoint exactly what was bothering me about this argument. Then it came to me. First of all, the creators who inspired our own work are (usually) compensated in some way. We buy books. We borrow books from the library. We pay gallery and museum fees to view artwork. We pay for TV subscriptions. We tell our friends about writing and other forms of art we enjoyed, who in turn discover it, which increases exposure and compensation for the original creator. We take time to savour and enjoy the artwork we consume.


Compare this to data ‘scraped’ from the internet to train a generative AI model. The original work is harvested and not savoured or enjoyed. The original work’s reach is not increased: instead, its influence is invisible and unacknowledged. The original creator is not compensated.


This is the difference. And – for me personally – this difference matters.


If, in the future, a generative AI model were created which licensed the original works and fairly compensated the original creators, then I would consider using it. It would be a way to increase exposure and compensation for the original creators. It could even help address marginalisation in the fiction world – by paying marginalised writers to include their work – and reduce the inherent biases in the output. More of the wealth would be shared with the humans who made gen-AI possible in the first place.


WHO OWNS THE COPYRIGHT?

As things stand, AI-generated work cannot be copyrighted unless there has been substantial human input in its development. What is the definition of substantial human input? This is unclear, and it may be that different countries decide to go with different definitions.


Writers who wish to write alongside AI need to document their process so that they can show their (human) input.


Although there are issues here too, personally I feel this is less of a concern than copyright questions around what went into the model in the first place.


To all creators: if you use generative AI to help produce your creative work, please please please acknowledge it as co-written with AI. We need transparency, so that writers and readers know how the work was created and can make informed decisions about their consumption. If the work is of high quality then it may not matter to readers whether or not AI was a 'creative' collaborator on the project.


WRITING SKILLS

This brings me to another concern, and one which I don't think we're feeling the impact of yet.


As a freelance fiction editor, I mainly work with newer writers who are still learning the craft. Each of my clients has had skills they were naturally good at, and skills which they needed a lot of help and guidance with. This is totally normal and expected, and – in my opinion – all newer writers (and established writers!) need to spend time practising their skills, learning how to self-edit, and learning how to craft a compelling story. Writing is difficult, and it takes years to become established in the writing craft.


It's one thing if an experienced writer (e.g. Joanna Penn) works with gen-AI to craft stories. She has been writing without AI for many years, and she knows how to craft a story. She knows how to assess the 'quality' of the AI's ideas and output.


But what about a newer writer? Those who start co-writing with AI might not be able to discern different quality levels in the output, and they may not learn the skills necessary to become a skilled and competent writer in their own right. And the newer writers who decide not to use gen-AI may get forced out of the market anyway because they can't produce books quickly enough to gain traction while they continue working on their craft.


The experienced, big name authors will be okay. It's the newer and / or midlist ones I worry about. The same is true of editors, artists, audio narrators etc; will gen-AI force out the 'newer' professionals in these spaces, meaning we collectively lose those skills? This might be acceptable for some types of technology, but gen-AI is literally doing our research and our creative thinking. What happens to humanity if we lose those skills? With the rise in fascism, these are skills we cannot afford to lose.


CLIMATE IMPACT

AI data centres use large amounts of energy and water to keep functioning. They are polluting the communities around them – which are more likely to be poorer / marginalised communities.


Some argue that AI is no worse than many other processes which also use large amounts of energy and damage the environment. Don't get me wrong: I think many industries need to make changes to the way they operate to better protect our planet and our future. However, would I rather energy be used to power our households, preserve our food, and provide medical care, or would I rather that energy be used to help people write emails?


Humanity cannot keep abusing the environment and continue to get away with it.


ARE WE FOCUSING ON THE RIGHT THING?

I don’t know about you, but when I imagined the future of AI development, I imagined AI being used to help free humans from the burden of beneficial but repetitive, soul-draining tasks. I thought the general idea was to make life easier and more enjoyable for people!


What I did not imagine was that AI would be used to carry out creative tasks like generating artwork and writing: in other words, the types of tasks humans actually enjoy doing and would love to have more free time for. But, no. Greed and capitalism once again win over individual humans’ quality of life.


I suppose I should have known. When the pace of technological development began ramping up last century, people hypothesised that it would be beneficial for humankind. Technology would increase our efficiency, and thus free up more of our time for personal enjoyment and fulfilment.


Of course, that isn’t what happened. Human greed won out. Technology and its efficiency gains are instead used to further exploit frontline workers in all sectors. Working hours creep outside their contracted scope because employees are technically contactable at any time of day or night. And yet, many companies still force employees to commute to a physical workplace each day despite many types of remote work now being possible. Can’t have technology making it easier to work from home and enjoy better quality of life, oh no. Our economy relies on us producing more and more and more, even though it damages the environment, much of it is wasted, and most of the profit goes straight to the top.


Exhausted cartoon car with flat tyres, scratches, and dark smoke billowing out of the back
Why can't AI take over the work we don't want to do?

You may sense that I’m angry about the direction AI has taken! I say this as a mother of young children who is trying to run a household and build a writing and editing business in whatever snippets of time I can carve out between everything else. I would love an AI to help with cleaning and maintaining the house, and food shopping, and cooking etc. This would free up so much of my time, and so much of other people's time (read: women and other marginalised groups) to build businesses, to learn, to create, and to enjoy time with their families.


But alas. Writing is the ‘easy win’ for technology professionals who likely have other people (read: women and other marginalised people) taking care of all that stuff for them so they can exploit struggling creatives instead of actually helping to ‘free’ large sections of society. As a woman, I’m used to female roles and ‘female work’ being undervalued and underappreciated (and definitely underpaid). I suppose that's a big part of why this makes me angry, and I’m struggling to keep my own emotions and experiences out of this part of the discussion (oops!)


SO HOW DO WE MOVE FORWARD? WHERE DO WE GO FROM HERE?

I won a Writers' College essay competition in 2025. The competition permanently closed right afterwards because of the volume of AI-generated submissions they received, despite the rules explicitly excluding AI-generated submissions. So now, all authors lose out on benefitting from this competition because of some bad actors who couldn't even be bothered to read the submission guidelines before pushing a button and generating their 'essay'.


Many publishers, literary magazines and writing competitions state that they will not accept writing produced or assisted by generative AI. Will they change their minds in the future? Will we see traditional publishing as the route for ‘fully human’ writers, and indie as the route for AI-assisted writers? Will the traditional publishing world also start using AI-assisted writers and freelancers? Some publishing companies are now starting to license their IP to train gen-AI models (e.g. HarperCollins). Many of my editing clients are anti gen-AI, but will that always be the case? Many are resisting for now, but how much longer will that continue?


I’m not here to say what will happen. None of us know exactly how this technology will work out, or whether – and how – it will reshape the creative writing landscape.


MY OWN STANCE?

This is by no means an exhaustive list of all the concerns and considerations around generative AI for writing, and if you want more up-to-date information on the legal and technical aspects of AI then I suggest you look for other sources. However, these are the issues which stand out to me as a non-technical, non-wealthy, ‘small’ creator who is facing a massive shake-up of the industry in which I (try to) make my living.


As you can probably tell, my feelings about using generative AI for fiction are extremely mixed. I’ve tried my best to be balanced and impartial, but for some of the issues that is impossible for me. I do not condone abuse towards individuals who use gen-AI, but that doesn’t mean I don’t have strong opinions about the technology and how it is being developed and used – at least as it currently stands.


Regardless of my feelings, I believe the digital cat is now out of the bag. Hopefully, governments and relevant departments and professions are waking up to these developments, and hopefully they will try to rein in and regulate this rapidly-changing field – e.g. revisiting copyright laws, requiring disclosure when AI is used etc. Also, as writers and other related professionals, we need to stop hiding from this. We need to try and get to grips with it, to understand it as best we can, to help inform legislation and guidelines for its use, and to decide for ourselves whether (and how) we will include generative AI within our own business models. Writers cannot afford to leave it up to the ‘techbros’ to decide how generative AIs are regulated and used.


Personally? I do NOT intend to use generative AI for brainstorming, or writing my fiction, or to generate blog posts, or for marketing. I use spelling and grammar checks in my everyday writing and editing as a backup for my brain, and I will continue to do so. For me, this is where I’m comfortable drawing the line with regards to how much I include AI within my own business.


From an editing perspective, I’ve always enjoyed developmental, line and copyediting more than proofreading, and these are areas where – at least currently – I feel a human brain is superior to an AI one (e.g. character development, story progression, cohesion etc). I have focused my editing business on these areas.


Basically, I will make my humanity part of my brand moving forward.


I have also added gen-AI use to my editorial Terms and Conditions. BookBub recently published a survey which suggested that authors are split roughly 50/50 between pro- and anti-generative AI. At the moment, I target those authors who do not wish to use gen-AI, and I require full disclosure of how – and to what extent – gen-AI has been used before I work with a client. I'm not necessarily against working with an author who has co-written with AI, but I am not going to edit something that a human author hasn't had significant involvement with. As an editor, I refuse to let my role become 'make this AI text sound like a human'. If more authors start using gen-AI in the future then I may have to shift how I run my business, but this is my decision for now. I feel better for owning my position and intentions, and I know how I will approach these changes going forward.


I reserve the right to change my mind and adjust my business model in the future. However, for now, the ethical issues surrounding this technology mean that I will emphasise my humanity as part of my writing and editing brand, rather than using gen-AI.


CONCLUSION

It is important to understand that these developments are very new and potentially disruptive, and the picture isn’t as black and white, or as all-or-nothing, as some would like to believe. There are lots of grey areas and unanswered questions.


One thing is becoming clear: we can no longer hide from this. We need to find out as much as we can, and make active and informed decisions about whether, how and to what extent we may – or may not – include generative AI in our future businesses.


Each of our writing careers is a business, whether we think of it that way or not. Like any other business, we need to examine the coming changes and make active, informed decisions about how we will deal with those changes. While AI and the rules and regulations around its use may shift and evolve, the one thing we cannot do is bury our heads in the sand and hope it will go away. It is up to each of us to decide how we will move forward, in the ways that work best for us personally.


BEFORE YOU GO…

Do you have any thoughts on generative AI and / or the issues discussed above? Please join in the discussion (contact details below).


Please feel free to comment on the article and/or contact me if you have any questions:


Socials: @cbentleywriter on most of them!




Buy me a coffee: https://ko-fi.com/clairebentley

I welcome respectful and friendly discussion on the topics I write about, including if your opinion differs from my own.


Disclaimer: generative AI

I do not use generative AI to produce or inform my blog, my images, or my fiction. All of my content is generated by the chaotic firing of my own (human) brain! (I have access to some images through my Wix subscription).

I do not consent to the use of my content, images, or fiction to train generative AI models. Please contact me to discuss permission and compensation if you wish to use my content in this way.
