Since the launch of ChatGPT in November 2022, a total of 12 lawsuits have been filed in America against OpenAI and Microsoft, averaging nearly one per month. This paints a clear picture of the concern that has been brewing over the perceived threat that generative AI poses to the creative industries. The lawsuits have been brought by authors and publishers who claim copyright infringement through the unauthorised and unethical use of their content as training data for ChatGPT.
Similar concern has come to light in the UK too, with research from the Copyright Licensing Agency (CLA) finding that 76% of content rights holders are aware of their content being used by AI without permission or payment. Among publishers surveyed, awareness was unanimous at 100%. This is contributing to a negative outlook towards AI within parts of the creative community: 55% of photographers, for example, are concerned about a loss of income because of generative AI.
Nevertheless, the overall approach towards AI remains largely positive in the UK: 80% of companies in creative industries are open to collaboration with AI if they think it will benefit them. However, the question of whether or not AI will benefit them remains a key issue at the heart of current copyright disputes.
Writers’ perspectives
The most recent lawsuit against OpenAI and Microsoft was filed by two former journalists and non-fiction writers: Nicholas Basbanes and Nicholas Gage. In a recent interview with the AI Journal, Basbanes said that his main motive for taking legal action was to join other writers in defending their exclusive right to the work they have produced, as enshrined in Article I, Section 8 of the US Constitution. This not only highlights the feeling of injustice within the wider writing community over AI's use of copyrighted content, but also points to the role of federal law in addressing copyright concerns in America so as not to erode writers' trust in the Constitution's protection of their rights.
Basbanes raised further concerns about the impact that AI's use of published work is having on the perceived value of information. Given the speed of generative AI models, their rapid intake of writers' content can give a misleading impression of the time and cost originally required to gather that information. As stated in the official complaint, Basbanes' books are based on extensive research that he conducted himself, devoting hundreds of hours to interviews, investing his own money in travel expenses, and paying for access and publishing rights to the copyrighted material he used.
This raises a critical point: if individual writers have to ask permission to access copyrighted material and be prepared to pay for it, why shouldn't OpenAI and Microsoft?
Another concern for writers is AI's abstraction of information from its wider context, which is a key element in its training and resulting generative ability. This process can arguably be detrimental to the value of some original writing, particularly autobiographical works or eyewitness accounts where unique historical and factual information may be provided at personal cost to the author.
For example, Nicholas Gage, the other writer behind the latest lawsuit, is best known for his memoir Eleni, which was made into a film in 1985. Based on the author's experience of growing up in the aftermath of both the Second World War and the Greek Civil War, his poignant memoir inspired US President Ronald Reagan to initiate summit meetings to end the arms race with the Soviet Union in 1987. This demonstrates the incredible power of the human voice and personal experience to impart factual information with a transformative message.
Access to this type of content may contribute to an AI model's holistic historical knowledge and overall accuracy. But it comes at the price of putting an individual's story in a de-personalised melting pot of data, and stripping the information of its connection to an individual human experience.
However, this is arguably just the natural cycle of information, which happens with or without AI. According to Basbanes, whose books have been widely cited in academia, authors cannot expect to have control over how their work is used. Indeed, humans are free to read and respond to each other's published work as they please. This calls into question whether AI's use of content as training data is really much different to humans' use of content for general learning purposes.
Is intellectual property unique to humans?
One of the reasons that the copyright disputes have caused such a stir within the creative industries is that they raise the issue of whether creativity is a uniquely human ability, or whether it can be replicated by machines. While it is now widely accepted that AI can outperform humans in several areas such as memory and processing speed, creativity has been traditionally deemed one of the key abilities that differentiates human minds from artificial ones.
Philosopher Patrick Stokes talks about this in his article Ex Machina, published in the New Philosopher magazine:
“We’re affronted at the idea of a mindless computer painting Van Goghs or writing essays or triaging complaints not just because we think they’ll do a bad job, but because we think they’re missing the originality that only humans, with their free and creative spirits, can bring to these tasks.”
Patrick Stokes, contributor to New Philosopher.
However, as much as we might like to think that creativity is a uniquely human quality, it is proving increasingly difficult to pinpoint exactly what it is that sets our human intellect apart. As tech writer Rob Enderle points out in an interview with Bloomberg, a lot of human-generated content that we might consider “organic” is nevertheless informed by and taken from other people's research and mental labour, which is essentially what ChatGPT does. According to Enderle, the only difference between what humans do and what AI does is that AI is far quicker and can process a far greater amount of content.
Nevertheless, while this may be true from a broad and exclusively computational perspective, many would insist that several differences still remain between generative AI and human creativity. In a recent report by the CLA, CEO Mat Pfleger argues that AI has not yet mastered the linguistic skill of writers, such as using dry humour to press a point. Also, a recent poll by the AI Journal found that respondents perceived the biggest difference between AI-generated articles vs. human-written ones to be the respective absence or presence of critical thinking.
This points to a fundamental difference between artificial and human creativity: humans have their own individual agendas and opinions which shape how they process information and rework it into new forms. This is why within human communities, the same set of information can evoke very different responses, resulting in debates and disputes. Within a hypothetical community of AI chatbots, it is difficult to imagine how a dispute could arise if all chatbots were fed the same dataset. Thus, while AI does share the ability to rework information into new forms, it does not have the (currently) unique human ability to bestow that information with a whole new meaning.
Following a similar chain of thought, Chinese artist Ai Weiwei explores the similarities and differences between AI and human creativity in his latest exhibition “Ai vs AI”, where he publicly presents AI with a set of provocative questions in an attempt to lay bare the distinction between the human thirst for answers and the machine that seeks to satisfy it. In the build-up to his exhibition, he declared that art which can be replicated by AI is meaningless. While this might seem like an extreme statement, it effectively raises the point that human art is about much more than mechanical and technical skill; it is about using images to evoke certain feelings or responses in the viewer.
Pressure on policymakers
So far, the main debate over the uniqueness of the human intellect has taken place largely in the theoretical and hypothetical spheres. However, with real-world developments such as the copyright lawsuits against OpenAI and Microsoft, the increasingly tangible implications of this issue are clear to see. Our understanding of whether and how human intelligence differs from artificial intelligence can shape our opinion on questions such as whether AI-generated content can replace content written by humans, and whether human intellectual property should be protected differently to content produced by AI.
Already, there is pressure on policymakers to take a firm position on such issues. For example, the UK Supreme Court was recently faced with a decision that bears directly on the debate over whether AI's creative abilities should be treated as legally equal to those of humans.
In this case, a computer scientist called Dr Stephen Thaler filed a patent application for some new tools that he had devised using an AI application he created called DABUS. But instead of naming himself as the inventor, he named DABUS. According to Nick Reeve, AI expert and Partner at Reddie & Grose Law Firm, Dr Thaler did this to flag up the theoretical possibility of AI's future rights as a legal person, rather than because it was legally necessary.
The court ruled that AI could not be legally named as an inventor, reflecting a majority view that intellectual property is currently unique to humans. Nevertheless, the fact that these issues are being publicly raised in court demonstrates that there is growing uncertainty over the uniqueness of human intellectual property in our increasingly AI-dependent society. This is the real concern for writers and artists, and is the reason that the last year has seen so many copyright lawsuits against OpenAI and Microsoft. Indeed, in the context of AI's increasingly popular application in content creation, it is only natural for writers and publishers to seek protection for their intellectual property and, by extension, their livelihoods. In view of this, OpenAI and Microsoft's unauthorised use of copyrighted content for their generative models is the final straw.