
The internet is awash with AI-generated images, and search is no exception. People looking for everything from homes to clothes, from stock photos to beauty products, are seeing real images replaced by AI-generated ones.
A lot has been written about the ethics of training generative AI models, and of using AI to do the work of people. But far fewer column inches have been devoted to the ethics of showing genAI images to people, the damage that can cause, and how to do it responsibly.
Artificially enhanced photos have been around forever. In beauty and fashion, models have long been photoshopped to have zero blemishes and impossibly small waists. Housing developments still years from completion are marketed with virtual photos and videos showing what they will look like.
As a society, we got used to these over time. Photoshopped models used to cause uproar; now they’re the expectation. But they were relatively limited in their impact: the images were either far from realistic, or just touched up here and there.
Now we have completely artificially generated images. Internet users run a search and are served an image of something that has never existed. They’re everywhere, and they’re almost impossible to spot.
I run a property marketplace, so I know that world well. Homebuyers got used to seeing virtual photos that were obviously not real. But those virtual photos are now more frequent and harder to spot, so we see stronger reactions. Some people don’t mind; others feel intentionally misled. Having dug into it, I’ve found it’s a non-obvious ethical dilemma across all forms of internet search.
On the one hand, people want to see what is actually on show. They want to see the home that is for sale, the jacket they might buy, or what the lipstick actually looks like. They want accuracy, and they don’t want to feel like they’re being manipulated. They certainly don’t want to buy a product only to discover it’s been inaccurately marketed (and the seller doesn’t want that either).
On the other hand, it is useful to see an aspirational representation of what it could be. I’ve never looked like a model, but it’s useful to see what clothes look like on one. And we all know that food doesn’t actually look like the food marketing, but I still get hungry seeing a slow-motion McDonalds burger.
So, as a publisher how do you navigate this potential minefield? We’ve come up with our recommendations to take care of both sides of the transaction. They all revolve around simply being clear and honest with buyers.
- Clearly label AI-generated photos. Don’t hide it away; be up front with your users.
- Show them a ‘real’ version too. Show the completely true, real-life representation of the thing.
- Keep a human in the loop. You can delegate the task, but you can’t delegate the responsibility. Any errors are still on you, the publisher.
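The three rules above can be encoded as a simple publishing gate in a content pipeline. This is an illustrative sketch, not a description of any real system: the `ListingImage` fields and helper names are hypothetical, standing in for whatever your own data model calls them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ListingImage:
    url: str
    ai_generated: bool
    paired_real_url: Optional[str] = None  # a 'real' counterpart photo, if any
    reviewed_by: Optional[str] = None      # the human who approved this image

def ready_to_publish(img: ListingImage) -> bool:
    """An AI image is publishable only when it is paired with a real photo
    and has been signed off by a human reviewer. Real photos pass as-is."""
    if not img.ai_generated:
        return True
    return img.paired_real_url is not None and img.reviewed_by is not None

def caption(img: ListingImage) -> str:
    """Rule one: the label is generated from the data, so it can't be
    quietly omitted by whoever writes the listing copy."""
    return "AI-generated image" if img.ai_generated else "Photo"
```

The point of the sketch is that all three rules live in one place: an unlabelled, unpaired, or unreviewed AI image simply never reaches the page.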
If publishers take those three actions, they’re most likely to keep the value that AI-generated images add, whilst drastically minimising the downsides.


