
The recent controversy surrounding Elon Musk’s Grok AI highlights a long-standing tension between technological progress and ethical boundaries. While society has struggled with doctored images for decades, Grok’s image-generation capabilities have opened a new paradigm, moving beyond simple “retouching” to the automated creation of non-consensual content. To understand why this specific AI model has triggered such intense global backlash and regulatory scrutiny in 2026, one must first look back at the origins of digital deception and the tools that paved the way for this modern crisis.
A History of “Faking It”
In the early 20th century, artists like Alexander Rodchenko used physical “cut-and-paste” techniques to create impossible photomontages, long before computers became mainstream. The technique was also used extensively in politics: figures were airbrushed out of official Soviet photographs to “correct” history. These early methods required physical skill and darkroom expertise, limiting who could alter reality for artistic or political ends.
The Birth of Digital Tools
Cut-and-paste methods evolved in the late 1980s and early 1990s with the rise of digital software. Adobe Photoshop, launched in 1990, made professional-grade retouching available to anyone with a Macintosh computer, effectively turning “Photoshop” into a verb. Around the same time, CorelDRAW popularised vector-based editing, enabling precise, scalable digital design. By the mid-1990s, “digital morphing” had become a cultural phenomenon, appearing in everything from Michael Jackson’s music videos to Hollywood blockbusters. Even so, manipulation remained a high-friction task, requiring expensive hardware and hours of manual labour to achieve realism, which kept convincing results largely confined to high-budget cinema.
The Great Acceleration
The fundamental mechanics of digital deception changed in 2014 with the arrival of Generative Adversarial Networks (GANs), an AI-based technique that allowed machines to “imagine” and generate entirely new photos by pitting two neural networks against one another. By 2017, this had led to the rise of “deepfakes,” marking the first time AI-driven face-swapping was widely used to target individuals without their consent.
However, the true turning point for the public came in late 2022 with the release of ChatGPT. Its overnight success sparked massive global interest in artificial intelligence of all kinds, prompting tech companies to race to build consumer-grade products. The same period saw powerful ‘diffusion models’ such as DALL-E and Stable Diffusion reach the public, allowing anyone to generate hyper-realistic imagery from a simple text prompt, though these platforms generally maintained stricter ethical guardrails than Grok would later have.
The Rise of the Grok AI Scandal
The explosive growth of the AI market prompted Elon Musk’s X to build its own AI chatbot, Grok, integrated directly into the platform. Grok later added image-generation features, and these sparked global outrage when it emerged that users were exploiting the tool’s “image-editing” and “spicy mode” options for a “digital undressing spree,” targeting both celebrities and private individuals.
Documented Abuse and the Global Response
Research from groups like AI Forensics suggested that Grok was generating thousands of sexualised images per hour at the height of the trend. Users were uploading photos of real people and asking the AI to place them in revealing clothing or “undress” them entirely. Following an investigation by the UK regulator Ofcom and condemnation from the EU, xAI was forced to implement geo-blocking. On 15 January 2026, X announced it would officially prohibit Grok from editing images of real people into revealing attire to curb this non-consensual content.
Why AI Differs from Photoshop
When a person uses Photoshop to create a fake image, the software is seen as a neutral tool, like a pen in an artist’s hand. Grok is different because it automates the entire creative process. Furthermore, because Grok is hardwired into a social network, harmful content can be created and distributed virally on the same platform where it is generated. Critics argue that xAI’s “free speech” ethos led it to ship without the safety filters found at more established tech companies.
Current Standing and Regulation
As of mid-January 2026, governments in Malaysia and Indonesia have blocked Grok entirely, and the UK could issue fines of up to 10% of X’s global revenue if it fails to comply with the Online Safety Act. While xAI is currently updating its filters to prevent further abuse, the episode has reignited a global conversation on whether AI companies should be held legally responsible for the outputs their models produce, and whether they should be allowed to self-regulate.


