THE ETHICS OF GENERATIVE AI IN CREATIVE INDUSTRIES

By Carl Lyttle, Head of AI Photography & Chief Digital Officer, nmatic.ai, one of the world's first hybrid AI creative production and technology companies.

Ethics in AI is not a debate. It is a design decision.  

What does responsible adoption of Generative AI actually look like inside a creative business, not in mission statements or public commitments, but in the day-to-day decisions that determine what gets made, how it gets made, and on whose terms? It is a question I find myself returning to constantly. Not because the answer is complicated in principle, but because the speed at which this technology is moving makes it genuinely difficult to keep the answer consistent in practice.

I come from photography. That matters, because it shapes everything about how I approach this creative space. A career spent protecting image rights, understanding what it means for a photographer’s work to be licensed, compensated, and properly attributed, does not switch off when the camera is replaced by a generative model. If anything, it becomes more important. Because the principles do not change. What changes is how easy it has become to ignore them. 

This is my opinion, as a photographer, as a filmmaker, as a Chief Digital Officer, and as someone running a hybrid AI creative studio and technology agency deploying these tools at broadcast and commercial scale every day, on where the industry currently is and how we have to act responsibly moving forward. It is written for agency leaders, brand-side marketing teams, content creators, photographers, artists, and yes, the generative platforms themselves. All of us are part of this. All of us carry a share of the responsibility.

THE RIGHTS PRINCIPLE DOES NOT CHANGE  

When I first started working seriously with Generative AI, I expected to find a technology built with the same regard for creative rights that the industry had spent decades establishing. What I found instead was a landscape of genuine complexity. A number of large language models and image generation platforms had trained their datasets on scraped internet content, absorbing the work of photographers, illustrators, authors, and filmmakers without explicit permission or compensation. In many cases, those creators still do not know it happened.  

Whether that constitutes a legal breach is not, in truth, straightforward. These models are not reproducing content, they are learning from it, not entirely unlike how a photographer studies the work of those who came before them. The material was openly available. No legal jurisdiction had anticipated machines learning at this scale, from this volume of material, for commercial purposes. The law simply had not been written for this scenario, and that gap between what was technically possible and what anyone had thought to regulate is precisely where the complexity lives.  

What surprised me was how little ethical framework had been applied before these tools reached the market. Photography, film, and music all have well-established, legally tested frameworks for how creative work is licensed and protected. The expectation should have been that AI would be built within those frameworks. Instead, it was deployed first and asked to justify itself later. 

The creative industries are pushing back seriously. Legal challenges are mounting, and while no single case has yet produced the definitive precedent the sector needs, each one is testing and shaping the boundaries of what is permissible. Some major AI platforms have already begun reaching licensing agreements with publishers, image libraries, and rights holders, an acknowledgement that the creative community’s concerns are legitimate, and the status quo is not sustainable. It is worth remembering that the frameworks protecting photographers and filmmakers today were not handed down fully formed. They were built over many decades through disputes, litigation, and hard-won negotiation. That process was possible because film and digital technology moved at a pace the legal system could, with effort, follow. AI does not afford that luxury, and notably, several of the founders building these systems have said so themselves, calling openly for regulation because they understand that the current vacuum creates long-term risk for everyone. 

SPEED VERSUS STEWARDSHIP 

At nmatic.ai, we are a hybrid AI agency built on craft skills. We are not a technology company that hired some creative people. We are a creative company of photographers, filmmakers, directors, and designers that has integrated AI deeply into how we work. That background shapes everything.

There is a fundamental tension in this industry between two mindsets. The technology-first mindset is focused on advancement: how fast, how far, how soon. But it tends to treat ethics reactively, a problem to solve if it surfaces, not a foundation to build from the start. The craft-first mindset asks different questions. Where did this come from? Whose work fed this model? Can we demonstrate that what we have produced is legally clean and morally defensible?   

We work within enterprise-grade generative platforms precisely because they ringfence client data. Nothing we put in gets shared. Nothing retrains the underlying model. Beyond that, we build bespoke generative systems tailored to specific client needs, custom-trained models, blended workflows combining AI generation with traditional photography and film production, hybrid pipelines that bring craft-level precision to AI-enabled scale. Enterprise-grade protection at the infrastructure level, bespoke capability at the creative level. That is what responsible commercial deployment looks like. It is not overcaution. It is the baseline. 

ETHICS AT THE IDEA, NOT JUST AT THE OUTPUT

The most important thing I would tell any agency working seriously with Generative AI: the ethical checkpoint cannot be at the end of the process. It must be at the beginning. 

In our workflow, that means interrogating every brief for IP risk before a single image is generated. It means agreeing licensing terms before developing any work involving a real person's identity or likeness, treating every digital twin – a digital model of a real person – as if that person were an actor on set. That means the same licensing agreement, the same compensation, the same territorial and media rights, the same re-use terms. The fact that we are not pointing a camera at them does not change the commercial value of their identity or their right to be compensated for it. The same applies to locations. A recognisable private property in a commercial AI campaign carries exactly the same obligations as a traditional location shoot.

When we develop fully synthetic characters, built from the ground up with no real-world basis, we still conduct reverse image searches before they go near a client. We track the development process so that if questions arise, we can demonstrate due diligence at every stage. That paper trail is our proof of IP integrity. We have walked away from projects where clients wanted to replicate a photographer's style without concern for where that crossed a line. We have had briefs not proceed because agency legal teams could not satisfy themselves the IP was clean enough to withstand litigation. That self-regulation is already happening, and it matters: it shows the market beginning to enforce standards that legislation has not yet reached.

THE SKILLS GAP NOBODY IS TALKING ABOUT 

My photographic background gives me something unique in this space: the ability to recognise another photographer’s work even when it has been passed through a generative filter. I have seen outputs where the subject resembles no specific person, but the pose, the lighting, the precise positioning in the frame, all of it carries the unmistakable visual grammar of a specific image. A non-photographer might not see it. I do, immediately.  

In many cases, this is not the model's fault. It is the prompt's fault. When someone writes "in the style of David Bailey" or names any working photographer, they are explicitly directing the model toward that artist's IP. The infringement risk is not buried in the training data; it is sitting right there in the brief. This points to something the industry is not discussing nearly enough: there is a significant and growing skills gap in how generative AI is being directed. Prompt craft – the request or instruction given to a Generative AI system – is not a minor technical detail. It is the creative and ethical interface between intention and output. Poorly constructed prompts, leaning on named artists or specific visual references without understanding the IP implications, are where a large proportion of the real risk lives. The solution is not just better technology. It is better trained, more creatively literate practitioners who understand what they are asking for and what they are responsible for.

Regulation will help, but it will always lag. What cannot wait is a shared sense of moral responsibility among agencies, platforms, brands, and individual practitioners alike. Legal compliance and ethical behaviour are not the same thing. You can be legally protected and still be doing something wrong. From our position, the answer is straightforward even when it is not easy: everything we produce must be legally bulletproof, ethically defensible, and properly compensated. The people whose IP, identity, or creative work contributed directly to what we make must be recognised and remunerated. Our clients need to stand behind what we deliver. And we need to stand behind it ourselves.

WHAT THIS IS REALLY ABOUT  

Generative AI is not going away. The question was never whether to adopt it; that conversation is over. The question is whether we build around it with the same discipline and regard for rights that the creative industries have always demanded of themselves at their best.

The rights principle does not change because technology has changed. A photographer’s work deserves protection whether it is licensed for a billboard or absorbed into a training dataset. A person’s identity deserves compensation whether they stand in front of a camera or are reconstructed by a model. A location, a style, a creative voice, these things have value, and that value belongs to the people who created them. 

The organisations doing this properly have a responsibility to be visible about how and why they work this way. Not to compete on it, but because the standards we demonstrate today are the standards the industry will be measured against tomorrow.

It is true that Generative AI can save brands significant amounts of money. The ethical, craft-led version of AI will never be as cheap as the model that ignores rights, provenance and responsibility. But consider the possibility: if you can save 20–60% on your current production spend through hybrid AI workflows, and do so while preserving the craft culture we all depend on to be entertained, engaged and communicated with, is that not enough? Or do you push for 80–90% savings simply because the technology allows it, cutting out the very standards that protect the ecosystem? The latter model, the one many of us quietly fear, does not end well for anyone.

Ethics in AI is not a debate. It is a design decision. And it has to be made at the very beginning, not when the problem has already arrived at the door. 

Carl Lyttle is Head of AI Photography and Chief Digital Officer at nmatic.ai, one of the world’s first hybrid AI creative production and technology companies. He has spent his career at the intersection of photography, image rights, and emerging technology. 
