Artificial intelligence = artificial engagement? How to use Gen AI to cut through the noise

By Jo Sutherland, AI ethicist and managing director of Magenta Associates

In late 2022, I had an existential crisis. ChatGPT had just been unleashed on the world, and I found myself staring into the abyss. If generative AI could churn out articles in seconds, what did that mean for professional communicators like me and my team?

Thankfully, the panic didn’t last long. As I delved into AI and ethics, taking courses at the London School of Economics and collaborating with the University of Sussex on our CheatGPT? research project – a study designed to gauge AI user sentiment – I gained a clearer perspective. AI is powerful, sure, but not all content is created equal.

Generative AI has democratised content creation, but it has also devalued it. It has made it easier than ever to produce content, but harder than ever to stand out. It has supercharged efficiency, but at the expense of trust and engagement.

Business leaders and marketing professionals are at a crossroads. AI isn’t going anywhere. But the way we use it will determine whether it makes us better communicators – or simply floods the internet with more digital sludge.

The crescendo of AI-generated noise

In theory, AI should make life easier. It can write, edit, summarise, translate and optimise content in seconds. The problem? Everyone’s at it, and yet very few understand how to edit and finesse what it throws out.

According to our research, 90% of online content could be AI-generated by the end of this year. The sheer volume is staggering. But volume and value are not the same thing. Much of this content is homogenised – AI models trained on similar datasets churn out eerily similar results. And so, we find ourselves in an endless loop of AI-generated blah blah blah.

Editors, journalists and comms professionals are already feeling the strain. The inboxes of media gatekeepers, once brimming with the usual mix of strong and weak pitches, are now drowning in AI-generated waffle. The tools might be new, but the problem isn’t – bad writing has always existed. Now, there’s just more of it.

The deprofessionalisation of content creation

AI has democratised content creation. That’s great. But in making it accessible to all, it has also blurred the lines between professional and amateur. When anyone with an internet connection can produce something that looks well-written, the expertise behind high-quality communication risks being devalued.

Magenta’s research with the University of Sussex revealed that 80% of content writers already use AI tools. But only 20% disclose this to their managers. Why? Well, maybe because deep down, they know that while AI is useful, it isn’t a replacement for genuine skill. AI can mimic tone, but it is not capable of original thought. It can regurgitate facts, or lies wrapped up as facts, but it can’t push the conversation forward. It can structure an argument, but it can’t exercise judgment. Not yet, anyway.

And then there’s the cognitive cost. Studies from MIT and Stanford suggest that over-reliance on automation dulls problem-solving skills. If AI is doing all the thinking for us, what happens to our ability to think for ourselves?

Stop the ‘blame and shame’ game

This underlying sense of shame needs to be addressed. There is no shame in using AI. Disclosure isn’t a confession or an admission of guilt. It’s a mark of professionalism. Being upfront about AI’s role in content creation, whether the use is light or heavy, ensures transparency. And transparency allows for critical appraisal by those with the expertise to do so.

I use generative AI to support my critical writing efforts. It speeds up the process, but only because I’ve honed a new skill – prompt writing – while safeguarding the ones that matter most: copywriting, editing, storytelling, research, and critical thinking. I’ve been a journalist since 2010, and those fundamentals haven’t changed.

We hear it all the time: AI is coming for our jobs. That’s not true. People who know how to use AI are coming for our jobs. Now is the time to upskill.

The onus is on employers to ensure their teams develop strong AI literacy. In fact, in the EU it is now a legal obligation. Leveraging AI effectively will offer a competitive advantage, but it’s not a silver bullet. Mastering it takes time, skill and effort. Writing a prompt and clicking ‘go’ isn’t going to cut it. The real value comes from knowing how to refine, challenge and apply AI’s output with strategic thinking and ethical oversight.

The erosion of trust

Misinformation is nothing new. But AI has made it faster, easier, and more convincing.

Deepfakes, synthetic media, and AI-generated propaganda are already shaking public trust. And it’s only going to get worse. We are entering an era where people fall into three camps: those unaware they’re being misinformed, those who no longer know whom to trust, and those who don’t care either way.

For businesses, the stakes are high. Trust is hard to build and easy to lose. The more AI floods the digital space with low-quality, misleading, or outright false content, the harder it becomes to distinguish fact from fiction. And if audiences start to doubt what they see and read, the credibility of all content – even the good stuff – is at risk.

The ethical vacuum

Despite the rapid adoption of AI in content creation, ethical frameworks are sorely lacking. Our research found that only 15% of professionals have received formal AI training, and 71% said their organisations had no clear guidelines in place.

For those whose companies had issued AI policies, the most common directive was simply: use it selectively. That’s not a policy. That’s a shrug.

AI doesn’t create meaning; it reflects and amplifies the data it is fed. If that data is biased, the output will be too. Without oversight, we risk perpetuating stereotypes, misinformation, and ethical blind spots at scale.

The case for human curation

So, where do we go from here? AI isn’t going away, nor should it. When used well, it can be an invaluable tool. But it must be guided by human oversight, not used as a crutch.

We have a choice. We can be digital road sweepers, tidying up after the AI-generated mess. Or we can be curators, like gallery directors, selecting, refining and elevating the best work for exhibition. AI can generate content, but it cannot create meaning. That’s our job.

The businesses and professionals who will thrive in this AI-driven landscape won’t be those who resist the technology, nor those who blindly embrace it. They will be those who learn how to wield it while reinforcing the qualities that make content valuable in the first place: creativity, originality, credibility and authenticity.
