For all its praiseworthy qualities of speed, accuracy, and comprehensiveness, AI-generated content arguably has one fatal flaw: it’s boring.
But is this really the case?
For one thing, the quality of being ‘boring’ is highly subjective. This means that even if AI-generated content has some inherent qualities typically labelled as ‘boring’, it doesn’t follow that AI-generated content is perceived as boring by a majority of people.
Nor does it mean that the question of whether a piece of content is generated by AI or a human is even a relevant factor in determining whether it is boring or not. Instead, it is typically factors such as personal writing style, subject matter, and thematic structure that can determine how boring a piece of content is.
Furthermore, given that users can customize Generative AI model responses with different prompts to tweak the writing style and thematic structure of the content produced, any ‘boring’ quality is arguably something that could be prompted out of AI-generated content by feeding the model the right instructions.
For another thing, differentiating between AI-generated and human-generated content is no easy task. In fact, many of us aren’t able to tell the difference between the two in many contexts, if at all – and this is why AI-generated deepfakes are such a concerning threat to security.
The fact that AI is able to replicate human-generated content to such an accurate degree is itself a strong argument against the claim that AI-generated content is inherently boring, or at least significantly more so than human-generated content.
Additionally, there is huge variety in how interesting and engaging human-generated content is, and chances are, the most boring piece of content most of us will ever read will have been written by a human.
Nevertheless, when we look behind the scenes and consider that AI-generated content is produced through the purely logical rationality of algorithms, there is arguably some inherent limit on how engaging, emotive, and surprising AI-generated content can be.
Below, we consider some characteristic qualities of AI-generated content which could be seen as markers of boring content. We then go on to question the key differences between AI-generated and human-generated content, especially given the difficulty in differentiating between the two. Additionally, we explore the broader implications of using Generative AI in content creation and creative workflows, bringing you insights from industry experts on the limitations, benefits, and risks of AI-generated content.
What is it that makes AI-generated content boring?
Repetition
There is a fundamental difference between the way in which content is generated through AI, and the way it is created by humans.
In contrast to even the most logical human mind, AI models rely on algorithms to generate content. In heavily oversimplified terms, what these algorithms do is essentially calculate the statistical probabilities of contextual word/phrase occurrences within the datasets they are trained on in order to create a cohesive and accurate output.
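To make that oversimplification concrete, here is a minimal toy sketch in Python (the tiny ‘corpus’ and the bigram-counting approach are purely illustrative assumptions, far cruder than any real Generative AI model) showing how ‘most statistically likely next word’ generation works in principle:

```python
from collections import Counter, defaultdict

# Toy 'training data': the model only ever sees these three sentences.
corpus = [
    "ai generated content can be boring",
    "ai generated content can be useful",
    "human generated content can be surprising",
]

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most probable next word, given the toy corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("generated"))  # -> 'content' (it always follows 'generated' here)
print(most_likely_next("be"))         # -> 'boring' (ties broken by first-seen order)
```

Real models work over vastly larger datasets and far richer context than a single preceding word, but the underlying principle of predicting what is statistically likely to come next is the same.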
In this sense, AI can never produce any original or new material; instead, it merely regurgitates the data it has been trained on but in a reshuffled format determined by the user prompt.
This means that AI-generated content is inherently prone to repeating phrases that are statistically likely to occur in its training data, even though words and phrases are reshuffled to create different and unique content. However, the extent to which this reshuffling enables AI models to create unique phrasing and avoid repetition remains questionable.
This issue came to a head earlier this year with a wave of copyright infringement lawsuits filed against some of the creators of Generative AI models. For example, in The New York Times vs OpenAI, the plaintiffs alleged that ChatGPT had regurgitated parts of their copyrighted articles word-for-word. Although OpenAI later contested this, it has not been a one-off allegation, and ‘memorization’ is now widely recognised as a potential flaw in Generative AI within the creative industries, calling into question the ability of Generative AI to avoid repetition.
But does this tendency for repetition really make AI-generated content more boring? Arguably, the way that AI models merge and reshuffle existing phrases to create new pieces of content shows some degree of similarity to the creative human practice of crafting a unique sentence to summarize information, or taking other people’s ideas and putting them into our own words.
Furthermore, according to Curt Raffi, Chief Product Officer of the content governance software company Acrolinx, the quality of the content that an AI model produces is heavily contingent on the craft of prompt engineering, i.e. the wording of the requests that you feed into the AI model.
“I don’t think that AI-generated content is innately more boring than human-crafted content, but I also think it depends on how you engage the large language model. Prompts which are misunderstood today are a science in themselves. How you ask something is what determines the type of creative, imaginative, rich answer that you get back. And so I think the whole concept of prompt engineering, and prompt engineering as a discipline, is an evolving science. Just helping people learn how to answer or ask those questions of the LLM will determine whether the answers are boring or not. It’s that old phrase, ask a stupid question, get a stupid answer. It’s not that different with a large language model, because they were programmed with content written by people. So, if you don’t ask the right questions, you may not get the right answer out of them.” ~ Curt Raffi, Chief Product Officer at Acrolinx
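As a rough illustration of Raffi’s point, the sketch below shows how the same request might be phrased as a vague prompt versus a more deliberately engineered one, using the OpenAI Python client (the model name, the prompt wording, and the expected difference in output quality are assumptions for illustration, not a prescription):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A vague prompt tends to come back with a generic, 'boring' answer.
vague_prompt = "Write about AI-generated content."

# A more engineered prompt constrains audience, tone, length, and structure.
engineered_prompt = (
    "Write a punchy 150-word opinion piece for marketing professionals arguing "
    "that AI-generated content is only as boring as the prompt behind it. "
    "Use a conversational tone, one concrete example, and end with a question."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```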
Predictability
Perhaps what is more important than phrasal repetition is the fact that the substance of AI-generated content tends to be very predictable. Trained to calculate the most likely continuation based on existing data, AI-generated content is far less likely to surprise you than content produced by humans, who are influenced by their own experiences, beliefs, and intuitions. These influences shape the content that humans create in a way that is just not possible for AI models.
However, the predictability of AI-generated content is also highly dependent on the data it is trained on. For example, if a large portion of the data used to train an AI model included research that counteracted the general consensus held by most people on a particular topic, its outputs would most likely be unpredictable ones that would surprise us.
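As a minimal sketch of why this predictability is partly a tuning choice rather than a fixed property, consider the entirely made-up next-word probabilities below: always picking the single most likely word gives the same safe output every time, while sampling with a higher ‘temperature’ deliberately lets less likely, more surprising words through. This is a Python toy, not how any particular model is configured:

```python
import random

# Hypothetical next-word probabilities after the phrase "The weather today is ..."
next_word_probs = {"sunny": 0.6, "cloudy": 0.25, "unseasonable": 0.1, "apocalyptic": 0.05}

def greedy(probs):
    """Always pick the single most likely word: maximally predictable output."""
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0):
    """Sample a word; higher temperature flattens the distribution, letting rarer words through."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

print(greedy(next_word_probs))        # always 'sunny'
print(sample(next_word_probs, 0.7))   # almost always 'sunny'
print(sample(next_word_probs, 2.0))   # 'apocalyptic' turns up noticeably more often
```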
More to the point, if the model did surprise us, would we even allow ourselves to trust it, or would we just label the unexpected output as a ‘hallucination’ that should be disregarded?
As Raffi points out, hallucination is a key risk that people think about when they use AI in content creation. Chances are, if AI-generated content was not predictable, this would lead to concerns over its accuracy.
“I think the biggest risks in people’s heads are: Are there lies or inaccuracies in the content? Has the AI hallucinated? Has it not been prompted correctly to give the right type of generation of content?” ~ Curt Raffi, Chief Product Officer at Acrolinx
In light of this consideration, the predictability of AI-generated content is more likely to be perceived as a sign of its reliability, rather than as an inherent quality that makes it ‘boring’. Nevertheless, the fact that the majority of use-cases for Generative AI centre around the summarization and organisation of data, or the automation of less creative content such as reports and emails, is testament to the fact that AI is best suited to generating content that has a predictable template and requires only limited creativity.
Lack of opinion/personality
Have you ever noticed how hard it is to get an opinionated statement out of Generative AI models? I’ve tried multiple times, but have never managed to coax a particularly colourful or personal opinion out of ChatGPT.
This is arguably another inherent limitation of AI-generated content that can make its outputs boring and inconclusive to read. For this reason, AI-generated content can certainly leave you feeling more informed on a topic, but probably not particularly entertained or strongly opinionated about it. This is significant because entertainment, or the validation/challenging of our existing belief systems, is a major incentive for humans to engage with content in our current society of information overload.
Humans have been telling stories ever since they could communicate, which is why the qualities of opinion, emotion, and personality have become so intrinsic to most forms of human communication. In fact, these qualities are so powerful and evolutionarily important that their presence/absence can significantly impact the degree to which humans are open and receptive to new information.
Currently, AI is unable to replicate these qualities because it has no individual character, no personal investment in the content it generates, and no belief system. But perhaps this will change. In fact, it might already be changing, with Generative AI models now being fine-tuned to embody the character and essence of particular brands in order to provide enhanced customer service.
According to Raffi, a key consideration for businesses in determining how and when to use AI-generated content is whether or not it can maintain a level of rapport with the target audience, and whether it embodies the brand identity.
“One key consideration for AI-generated content is, does it understand the audience? Is the tone of what is being written appropriate for the audience – the age level, the wording choices, the way it addresses the audience itself? And then also things like, does it represent your brand appropriately. If you’re aggressive and loud and in your face, maybe that doesn’t meet your brand. Maybe it doesn’t use the style guides that should be used by all writers of content.” ~ Curt Raffi, Chief Product Officer at Acrolinx
This suggests that AI models are already considered capable of creating content that reflects a personality or set of opinions, countering the idea that AI-generated content is inherently boring due to an inability to embody an opinion or personality.
An illuminating use case: AI in comedy
A great real-life context in which to test whether AI is innately boring is comedy. While comedy has many sub-genres which appeal to different tastes and personalities, the fundamental test of good comedy is whether it can make you laugh, shock you, offend you, or surprise you (or hopefully all of the above).
In fact, good comedy material is by definition essentially the opposite of boring content.
For this reason, the fact that AI is already being used fairly extensively in comedy is strong support for the argument that AI-generated content is not inherently boring – or at least is not widely perceived to be within the creative community.
For example, the theatre company Improbotics is pioneering the use of Generative AI within comedy and improv performances. Dubbing itself both a ‘science comedy show’ and a ‘live Turing test’, the company aims to explore the applications of AI in creating content that is engaging and entertaining, and to push the boundaries of using nonsensical AI-generated outputs to enhance human creativity.
Differentiating between AI-generated vs. human-generated content
As the prevalence of AI in content creation grows, distinguishing between AI-generated and human-generated content has become increasingly important. Understanding these differences can help readers appreciate the unique qualities each type of content brings and recognize why some may find AI-generated content innately boring. Here are several key indicators to differentiate between the two:
- Writing Style and Voice
- Emotional and Contextual Nuance
- Creativity and Originality
- Authenticity and Trustworthiness
Differentiating between AI-generated and human-generated content involves examining the writing style, emotional depth, creativity, and authenticity of the work. While AI-generated content offers consistency and efficiency, it often lacks the unique qualities that make human-generated content engaging and relatable. Understanding these differences can help readers appreciate the strengths and limitations of each approach and recognize why some may find AI-generated content innately boring.
Bored yet? Think that the above perhaps sounded a little generic or repetitive? Well, if you did, this might be because the beginning of this section was generated by ChatGPT in response to the prompt:
Write me a section on ‘differentiating between AI-generated vs. human-generated content’ for an article on the topic of whether AI-generated content is innately boring.
But whether you could detect the slight change in writing tone or not, the evolving capabilities of Generative AI models mean that there is less and less to differentiate between AI-generated and human-generated content.
However, opinions are quite divided on this issue. Some people have strong convictions that AI cannot fully replicate human creativity, or that human-generated content will always be distinguishable from AI-generated content due to its unique creative flair.
Meanwhile, others take the view that AI is already able to replicate human-generated content to such an accurate degree that it is more or less impossible to tell the difference. In support of this latter view is the fact that AI is already widely used by companies for free content creation, by students for school assignments, and in the workplace to pick up human slack.
Although measures have been taken to try and detect the use of AI in such applications (particularly for school assignments), the evolving capabilities of AI mean that this could well be a losing battle. But more to the point, it highlights how difficult it can be to reliably differentiate between AI-generated and human-generated content.
According to our own market research, there is no strong consensus over how easy it currently is to differentiate between AI-generated and human-generated content.
Nevertheless, we found that 36% of respondents to an AI Journal LinkedIn poll thought that they could tell the difference between the two pretty accurately, versus only 11% who thought they could not differentiate between them well at all. Additionally, a significant proportion of respondents thought they could tell the difference but only in certain contexts (33%), or only if they tried very hard (20%).
This not only highlights the importance of context in determining the difference between AI-generated and human-generated content, but also indicates that even if AI is more prone to generating boring, generic content, this is rarely obvious enough to count as a significant limitation of AI in content creation.
Limitations of AI-generated content in the creative industries
Despite the potential of AI to accurately replicate human-generated content, there are still some limitations to the capabilities of AI in the generation of creative content that are recognised by experts working at the intersection of Generative AI technology and content creation.
Ruslan Khamidullin, Co-founder and CTO at Filmustage, an AI platform that enhances the film pre-production phase, explains that AI-generated content in the film industry is not yet good enough to be used without human editing.
“In my opinion, we are very far from full automation [in content creation], if it would ever be necessary. Generative AI (if we are talking about image, audio, and video generation) never gives you 100% control over the output, which is crucial for creators (at least I think it is so). Yes, tools like Suno, Pika, and Stable Diffusion can generate amazing results, but in my opinion, they still look half-baked and require additional editing. This editing can be very time- and resource-consuming.” ~ Ruslan Khamidullin, Co-founder and CTO at Filmustage
According to Khamidullin, this limitation may confine the use of GenAI in the film industry to use cases such as idea generation, script breakdowns, synopsis crafting, and budget management, rather than the creative process of scriptwriting, despite attempts to further optimize AI-generated content.
“GenAI today is a great tool for prototyping and early development, but not a silver bullet. AI companies are trying to solve the puzzle by adding more training data and computational resources, but this approach does not guarantee a breakthrough.” ~ Ruslan Khamidullin, Co-founder and CTO at Filmustage
Similarly, Alexey Skobelkin, CPO and Head of Mediatech at Raw Ventures, an investor in Filmustage and other creative tech startups, argues that AI still lacks depth and nuance, which restricts its role in content creation to a supportive one.
“AI is not inherently capable of generating complex narrative content with the same depth and nuance as human creators. Its primary role is to assist in the creative process by providing tools for visualization and organization, based on the author’s vision. This supportive role ensures that AI enhances the workflow without replacing the unique contributions of human creators.” ~ Alexey Skobelkin, CPO and Head of Mediatech at Raw Ventures
How can AI best be used in content creation?
In recognition of the creative limitations of Generative AI, the main application of Generative AI in the creative industries remains a relatively mundane one: increasing efficiency by automating less creative, background tasks.
“One of the key advantages of AI is its ability to handle routine and time-consuming tasks, which allows creators to focus their energy on more creative aspects such as plot twists, character development, and thematic depth. By managing these mundane elements, AI frees up more time for creators to invest in the artistry of their work.” ~ Alexey Skobelkin, CPO and Head of Mediatech at Raw Ventures
“Scale, efficiencies, ROI, how do we make people more productive? That’s been the goal of all our businesses and humanity, from time immortal. In any event, making sure that people can scale, and AI helps you do that. It helps you be more efficient, faster and we’ve seen some significant time savings using our generative AI tools alongside Acrolinx as well. So, we have our AI Assistant and our Get Suggestions products, and we’ve seen 20 to 50% increases in the output and return for a company using that for their authors.” ~ Curt Raffi, Chief Product Officer at Acrolinx
For Skobelkin, this more limited application of AI is not just a practical matter but also an ethical one: it preserves the quality of creative content by keeping the artistry in human hands, and it protects the livelihoods of human creatives.
“Ethically, AI should help with routine tasks, leaving the artistry to humans. Filmustage is a prime example of how AI can be integrated into the creative process without overstepping its bounds. By optimizing production processes, Filmustage helps streamline the logistical aspects of filmmaking, turning creative visions into manageable projects. This demonstrates how AI can support rather than supplant human creativity, ensuring that the end product is a true collaboration between technology and human ingenuity.” ~ Alexey Skobelkin, CPO and Head of Mediatech at Raw Ventures
Nevertheless, the automation of mundane background tasks is not the only way that AI can be used in creative workflows. Drawing on his own experience, Raffi points out how AI can act as a personal assistant to creatives, helping them overcome barriers such as writer’s block, distractions, and personal issues that can hinder focus.
“I think Generative AI can enhance our creativity, because think about all of the disturbances you’re getting during the day. You’re told put together a proposal or a technical document on this, and the fight with your significant other is in your head, your child might be in the hospital, it might be a natural disaster, a storm or something that you know someone’s involved in, or maybe you’re wondering about someone in the Olympics that you were rooting for – and your mind is distracted. Those can all impair our creativity. So I think having a tool like AI can really help us extend our creativity in new ways. Oftentimes I’ve used AI to help spawn my thinking, and I’ve gone, oh, wait, yeah, I could riff on that, and then I can go in different directions. So I see it as an extension of creativity, not a creativity limiter.” ~ Curt Raffi, Chief Product Officer at Acrolinx
The future of AI in content creation
Overall, the many existing use cases for Generative AI in content creation are still on the more conservative side, with the technology being used for the most part in the automation of more mundane, background tasks.
This indicates that there is still some way to go before AI is more widely trusted to generate more creative content. For now, AI still typically requires extensive human oversight in order to produce high quality, nuanced, creative content.
Nevertheless, the proliferation of AI-generated content is making human oversight increasingly difficult to enforce at scale. This could become a significant driving force behind automated, AI-powered content governance, as Raffi points out.
“Right now, the guardrails for AI-generated content are most often human. There’s someone proofreading, there’s someone saying, “what you wrote doesn’t match our style guidelines, and we need you to fix this”. Ideally, there should be some type of automated solution to do that, because people just can’t keep up with the volume of AI-generated content. So, you need to have automated guardrails, not just manual guardrails. Human beings won’t be able to do it at the scale that’s necessary.” ~ Curt Raffi, Chief Product Officer at Acrolinx
As AI is used in bolder and more innovative contexts, such as Improbotics’ upcoming AI-infused comedy shows at the Edinburgh Fringe Festival, it will become increasingly clear whether AI is innately boring and, more generally, what exactly its creative limitations are.
In turn, this will help the wider content creation industry by increasing awareness of the upper limits of Generative AI’s capabilities, highlighting where it does and does not require human oversight.