AI, Artists and Authors


Just recently, National Novel Writing Month (NaNoWriMo) released a statement regarding their stance on the use of AI in their annual writing event. Since then, there has been huge public backlash, members of their writers’ board have stepped down, sponsors have pulled out and they have amended the original post.

The position that NaNoWriMo has taken is that they are entirely willing to allow the use of AI in their event and that they believe that the “categorical condemnation” of AI has undertones that are ableist, classist and otherwise problematic. The amendment to their original Zendesk post, which I firmly believe was primarily to cover their own backside given the immediate and severe backlash, further explained that they fully realise that there are bad actors in creative spaces making use of AI and that such uses go against their values. Notably, this amendment did not actually change their policy in any way. You can read their actual post here if you’re interested.

In light of this big shake up in the writing community, I thought now might be a good time to talk a bit about how I feel about the use of AI in creative spaces. I’ve been wanting to do this for a while. I might as well do so while it’s topical.

Before we go any further though, I do want to be clear. I am not a programmer. I am not an expert on how AI software is created and functions. I’m pretty well-informed and I do my research, but I can only speak to that in broad strokes. I am however a person who works in creative spaces and I feel like that means I’m pretty well-positioned to talk about the impact AI can have in those spaces. Of course though, I will primarily be talking about AI and its relationship to writing.

What is Artificial Intelligence?

I feel like most people understand this already, but to preface everything else, it might be worth going over. True artificial intelligence doesn’t actually exist. ChatGPT is not sapient or sentient and it isn’t about to become so. When we talk about AI, we are talking about software that is designed to perform specific tasks with a degree of complexity and adaptability.

As it stands, AI as we know it comes in two general flavours: traditional AI and generative AI. Traditional AI is essentially a program that is given a set of rules and parameters. As new information is fed into the program, it can make adjustments according to the rules it operates under. Generative AI is the new kid on the block that has been causing all the controversy. Generative AI is a program that is essentially trained to recognise patterns. As more and more text and images are fed into the AI, it starts to recognise how certain words, shapes and colours can fit together in response to a prompt. Then, when given a prompt, it can create something using those words, shapes and/or colours. This is a gross oversimplification of the process, but that’s the gist.
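If you want a slightly more concrete picture of what “pattern recognition” means here, the toy sketch below is about the simplest possible version of the idea: a little Python script that “learns” which words tend to follow which in a scrap of sample text, then strings words together from a prompt. To be clear, this is purely illustrative and every name in it is made up for the example; real generative AI uses neural networks and staggering amounts of data, but the learn-the-patterns-then-generate loop is the gist.

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain, not how real generative AI is built.
# The core idea is the same in spirit - learn which words tend to follow which,
# then generate new text by chaining those learned patterns together.

training_text = (
    "the knight rode into the forest and the knight drew his sword "
    "and the forest grew dark and the sword shone bright"
)

# "Training": record which words follow each word in the sample text.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": starting from a prompt word, repeatedly pick a plausible next word.
def generate(prompt_word, length=10):
    output = [prompt_word]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:
            break  # no learned pattern continues from this word
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
```

Everything this script can ever produce is recombined from the text it was fed, which is also why the question of where the training material came from matters so much.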

Why the Controversy?

Frankly, there are a lot of reasons for the controversy surrounding AI. However, the vast majority of this controversy is based around generative AI, not traditional. As far as I can tell, traditional AI is not capable of creating the same problems as generative AI. So, I’ll be predominantly talking about generative AI throughout this section.

Narrowing things down further to focus purely on the problems AI causes in creative spaces, it boils down primarily to two issues: AI and theft, and AI and industry. Both of these categories are multifaceted in and of themselves. As such, I’m going to be giving you the abridged version. Otherwise, this would end up becoming more of a dissertation than a simple blog article.

AI and Theft

The relationship between generative AI and theft primarily comes in two forms. The first is explicit theft: people using AI to recreate the work and style of an artist they don’t want to pay. Essentially, the AI is trained on images sampled from an artist - usually without paying for them - and then produces something similar to those images. Sometimes so similar it isn’t much different to doing minor edits in an image editing program. This can be taken a step further if the person prompting the AI then sells that art. I don’t think it is hard to understand why artists are mad about someone using computer software to get their hands on their art without paying them, even for personal use. When you add in the people who are actively selling these bootlegs, it gets even worse. It’s hard to make money as an artist. Especially as an independent artist. If someone actively plagiarised my book, changed the title and then sold it at a cheaper price, I’d be furious too.

I actually touched on the second part of this already in the first paragraph of this segment. It’s a broader kind of theft. We talk about generative AI creating new art - written or pictorial. But, as I said earlier, what it is really doing is pattern recognition. To achieve this, an AI program has to be trained on a very large sample of other, pre-existing work. Often, this is work scraped from the internet or uploaded to the AI without the consent of the original creator. Then, even if someone isn’t explicitly trying to recreate their style, their art plays a part in the pattern recognition process that results in a new creation. This newly generated AI story or picture is built off the work of real human artists who are not being compensated for the time and effort they put in. Every AI manuscript, painting, song or animation that someone creates in minutes by giving their software a prompt is built on hours, days and years of a real artist’s blood, sweat and tears, and that artist is getting nothing for it. This AI work is then being sold. It is used in movies, books and games, and a third party who did none of the work is making the profit. The companies selling the AI programs that enable this theft are, by default, profiting off of this stolen art as well.

It’s not dissimilar to piracy. I have definitely heard the “no ethical consumption under capitalism” argument made when it comes to AI. However, there is a big difference. Pirating a movie, show or game is typically at the expense of a corporation that is still making copious amounts of money and has already paid its artists. The majority of the people who are being stolen from when it comes to generative AI are smaller, independent writers, musicians and visual artists who are already struggling to make a living off their craft. Wherever you stand on the ethics of piracy, it’s illegal for a reason. The theft generative AI is used for, and has been dependent on, is much more harmful to creators.

AI and Industry

Even beyond the argument that generative AI perpetuates theft, its unchecked use has wider repercussions for creative industries. I don’t just mean the arts either. AI is a serious threat to those working in areas like advertising, editing, programming and web design too. For the sake of expediency and sticking to my own lane though, I’m obviously going to be focusing on how it impacts the arts.

As is becoming a recurring theme in this article, the problem generative AI poses to creative industries can be divided into two categories: oversaturation and labour.

The arts are already highly competitive and very saturated fields to work in. One look at Amazon or Etsy should tell you that. For example, no matter how much we might try to support each other, every author is in competition with the others in their genre - arguably even those outside their genre. While someone buying one book doesn’t inherently mean they won’t buy someone else’s, any given individual only has so much time and money to spend on books. That means there is a finite number of books each person can read, and we are all trying to be the person responsible for one of those books. On top of that, there are a lot of authors. The majority of sales are obviously going to the small percentage of big name authors. Your Stephen Kings, Danielle Steels or celebrities with ghostwriters. That leaves a comparatively small piece of the pie for the rest of us to fight over - whether traditionally published or independent. There are millions of authors out there vying for those sales even before AI enters the picture. Now consider that generative AI can churn out far more books, far faster. This adds an influx of content to an already highly competitive market, which makes it harder for actual authors to earn a living off of their work.

Naturally, this is something that is easy for businesses to abuse. A good AI program can do the work of multiple human writers or artists, faster and at a fraction of the cost. The work might not be quite to the same standard. It might be somewhat generic. But that doesn’t matter when the software is much cheaper than a team of artists and doesn’t require lunch breaks or sick leave. A smaller team can then be kept for quality control. Reducing the human element is cost-effective, and even massive companies that don’t need to reduce costs will still do so for the sake of greater profit. Technological advances have always caused changes in the commercial landscape. Old professions disappear and new ones emerge with each technological leap. But this isn’t the same as automation making factory work not just more efficient but easier and safer. That shift still created new careers that ended up employing millions while making the work environment healthier. Replacing human artists with AI doesn’t have that same effect. More jobs will be lost than gained. Progress shouldn’t come at the expense of people’s livelihoods.

Besides, maybe sentimentality shouldn’t have a place in this discussion, but the human element is what makes art beautiful. The uniqueness. The personality. For want of a better word, the soul. Generative AI isn’t really art. Art is distinctly human. Generative AI is just mashing our work together and producing something ultimately hollow. I don’t want a world where machines are replacing artists who still have to clean their own houses. Technology should better humanity, not remove it from the equation.

Can AI Be Used Ethically In Creative Spaces?

In NaNoWriMo’s statement and subsequent amendment, they talked about a few things. Originally, the crux of their argument was that being categorically anti-AI was ableist and classist, basing their argument around the idea of accessibility. The addendum further clarified that AI can be used in ways other than simply creating a new story from a prompt and that not all AI is even generative, lacking the capacity to do anything of the sort.

It’s easy to write these arguments off as excuses, ass-covering or otherwise statements made in NaNoWriMo’s self-interest. As I said already, I personally believe that this is very much the case. However, that doesn’t mean that these points aren’t worth discussing. Given that, I’m going to talk a little about NaNoWriMo’s points regarding AI and then move on to a more general discussion of ethics when it comes to AI and art.

Anti-AI as Ableism

The argument here is that disabilities - physical, intellectual or otherwise - present an obstacle to art that AI can help people overcome. Simply put, you cannot deny that a learning disability makes it harder to put a story into words. A person with cerebral palsy affecting their hands is obviously going to have a harder time drawing, if they can at all.

However, speaking as an autistic person with a chronic illness and neuropathy in my hands, I don’t think that this argument necessarily makes generative AI any more ethical. I don’t think having a disability makes it okay to steal from someone. While we cannot deny that a disability can be a difficult hurdle to overcome in pursuing the arts, we also cannot deny that generative AI right now is typically built on a foundation of work taken without its creator’s consent. We should absolutely strive to make creative industries accessible to people with disabilities. However, until such a time as artists are receiving adequate compensation for their work being used to train AI, generative AI cannot be considered an ethical way to overcome the barriers created by a disability. It isn’t okay to make your own life easier at the expense of someone else’s livelihood, even indirectly. Being against unethical work practices and theft isn’t ableist.

That is specifically generative AI though and, as we have established (and as NaNoWriMo have reminded us), this is not the only kind. When it comes to traditional AI, I see no reason it couldn’t be ethically implemented in writing or visual art right now. I’m no expert, but I believe more traditional AI could be used to make a more accurate and intelligent spelling and grammar checker, improve speech-to-text functionality, smooth out particularly rough and shaky line art and even keep colour within lines (although the fill tool can already do that just fine). I don’t see why functionality like that - which would be helpful to everyone at the expense of nobody - couldn’t be used.

Unfortunately though, the blanket allowance of AI in a creative space like NaNoWriMo - or worse, the workplace - doesn’t just include tools for accessibility. It is tacit approval of content created by generative AI as well. If you want to allow traditional AI tools to improve accessibility, you should still explicitly ban generative AI right now. Not doing so is, at best, a huge oversight of a very simple solution to the problem or, at worst, an unethical decision based on an unwillingness, or inability, to differentiate properly between human work and AI-generated content.

The reality is that most people are not aware of the difference between traditional and generative AI. In the current social climate, talk of AI is assumed to mean generative AI. So, unless you explicitly ban generative AI, people are going to assume you are giving the go-ahead to generative content. Being angry at that isn’t ableist, and the onus was on you to take a clear stance against generative AI to begin with.

Anti-AI as Classism

The NaNoWriMo statement claims that an anti-AI stance is classist on account of inequitable access to human assistance. Basically, because some writers are unable to afford the assistance of a professional editor, disallowing the use of AI to take on this role instead is discriminatory.

Again, there is a kernel of truth to this claim. Hiring a professional editor does cost a fair amount of money. If you want to pay people who beta read for you or act as sensitivity readers, that’s even more expensive. However, there is a big difference between what an editor is going to do for you and what an AI program is going to do for you. In my experience, an editor reads through your work and notes down areas where there are spelling and grammar flaws, plot issues or just parts of the story that could be written better. It is then up to you as the author to rewrite the problem parts of the story according to the notes you receive. This is very different from having a generative AI use your work as a base and then do the rewrites itself. In the first case, you’re still doing your work as the writer. It is not classist to condemn someone for claiming the product of a program - one built on the work of others who actually put the effort in - as their own.

There’s a little more leeway with traditional AI. Since it can’t write for you, using it as a tool to locate spelling and grammar errors or frequently repeated words and phrases is probably fine. That’s not so different to using any other spellchecker or even a human editor. Although, I would argue that you’re going to get more out of working with a human editor. There’s also the simple fact that traditional AI can only work with those mechanical aspects of writing and cannot provide the more narrative-focused services of an editor or fill the role of a sensitivity or beta-reader. It is definitely not the same as using generative AI in your work.

But NaNoWriMo’s statement has the same problem here as it does in dealing with claims of ableism. While, post-addendum, they acknowledge that there is a difference between traditional and generative AI and suggest they aren’t necessarily giving their approval to have AI write your story for you, they do nothing to actually prohibit the use of generative AI. You simply cannot claim to recognise the problem of allowing “bad actors” free rein to use AI, but do nothing to actually restrict the use of generative AI.

AI and NaNoWriMo

The amended version of NaNoWriMo’s statement does raise some good points. Traditional AI is much less problematic in creative spaces than generative AI is. Even though I am personally wary about letting even a drop of AI into these areas, I can’t really find an objective reason to say we should keep traditional AI out of writing - or the arts in general. However, the abject refusal to properly acknowledge that generative AI is a serious problem in creative industries and the subsequent lack of any effort at all to prevent cheating with generative AI bothers me. It bothers a lot of people. And I cannot excuse calling those of us who take issue with this inaction ableist or classist to justify that inaction.

It is true that disability and socio-economic status can be barriers to being an author. I have experienced both of these things first hand. We should be making an effort to make creative industries accessible to a diverse range of voices. We should be creating spaces where people facing these hurdles can be heard and can have easier access to the assistance they need. However, that is not what is happening here and it is not ableist or classist to call that out. It isn’t ableist to be against plagiarism and theft. It is not classist to expect people not to claim the work of others as their own. Yes, traditional AI isn’t nearly as problematic. NaNoWriMo’s policy doesn’t have that kind of nuance though. Instead, their statement insults the people who recognise the problems in their policy, pays lip service to their concerns and ultimately does nothing to actually address the issue of generative AI.

I don’t think that it’s a coincidence that NaNoWriMo has adopted a completely open stance towards AI while having an AI company as one of their sponsors. I don’t think it is okay to pretend this is an issue of accessibility and that you have humanitarianism as your motivation while actively promoting an industry that is hurting artists’ livelihoods.

However, this article isn’t just about NaNoWriMo. I might have used their policy statement as a lens through which to examine the issue of AI for most of this segment, but this is something much bigger than just NaNoWriMo.

Ethical AI

The reality of the situation is that the AI industry is currently booming. More and more companies are exploring ways to incorporate AI into their products and business practices. Traditional AI doesn’t seem to pose much of a problem here, but the generative AI cat is out of the bag and we aren’t going to be able to stuff it back in there. We can only do what we can to regulate it and mitigate the harm it can cause. It isn’t a question of whether generative AI in creative work is ethical. It is too late to ask that question now that it is already worming its way into our lives. We must make generative AI as ethical as we can. Luckily, there are things we can do.

The first - and most important - of these steps is regulating AI and creating laws that protect actual human creators. Surprisingly, the law is being unusually quick to adapt to changing technology in this case. In a lot of places, steps are already being taken to do this. In the US, as an example, AI-generated works cannot be copyrighted, because the program responsible for them is not a human author and has no legal right to hold the copyright. However, there is still work to be done. Here in Australia, for example, the law is still exploring exactly where it stands on generative AI. In my opinion, the answer should be clear. It should be illegal to train AI on the work of others without their consent and without providing compensation. Work created by generative AI should not be protected by copyright either.

There is also the option of training AI programs on works that have already entered the public domain. I don’t think that this is a perfect solution - especially without also implementing legislation that protects artists. Any work generated by an AI is work that isn’t going to an actual person who needs money to eat and pay their bills. However, it does mitigate the issue of theft. You can’t steal something that is in the public domain.

Generative AI could, in theory, also be used to improve working conditions in artistic industries. The big example I think of here is animation. Animators have an extremely demanding job, especially those who are still working with more traditional types of animation. Generative AI could be a useful tool for reducing the workload on animators by limiting the number of things they have to draw or model over and over again. But it should not be used to replace those artists.

Despite the questionable specifics of NaNoWriMo’s statement, policy and motivations, they were right that generative AI might have a place in making various artforms more accessible to people with disabilities. A generative AI program could compensate for the aspects of the artistic process that some people are physically unable to complete. However, in order for this to be done ethically, the AI has to be trained ethically. That means that, unless the AI is specifically trained using work in the public domain, it cannot use art without the consent of the originating creator and without offering them compensation. Laws would also need to be in place to prevent businesses from exploiting this to cut corners. I could see a world where generative AI, regulated and trained ethically, could be readily available for personal use.

Conclusion

As much as I wish discussions of AI in creative spaces could be simple, black-and-white affairs, the truth of the situation is that things are more complex than that. Traditional AI represents a far less problematic addition to the lives of creators. Generative AI exists. It is out in the world and I don’t think it’s going anywhere. But, theoretically, we can reduce the danger it poses to actual human artists.

However, right now, we do not live in this hypothetical world of well-regulated, ethically trained generative AI. Right now, we cannot trust that people using generative AI are not passively or actively stealing from real creatives. At this point in time, generative AI is a threat to the livelihoods of visual artists, musicians and writers. It is unethical to make use of it. It is even morally dubious to use generative AI purely for private, personal use, because you are still supporting a predatory industry that is doing real harm.

So, right now, if you are chewed out for openly allowing free rein to use generative AI in your creative space or for making use of generative AI yourself, it’s not because someone is being ableist or classist. It’s probably because you deserve it.
