
The relentless churn of harmful social media content, from the obvious dangers of violence and misogyny to the creeping normalisation of toxic beauty standards and extreme influencers disguised as role models, is reshaping childhoods.
New research has found that, on average, children are exposed to around 2,000* social media posts a day through platforms such as TikTok, Instagram and YouTube. While parents fear the extremes, children are suffering from the slow-burn effect of daily, cumulative toxicity – “longitudinal overexposure” – the relentless feed of toxic narratives about violence, misogyny, beauty, self-harm and more. This is not content children go looking for; it finds them, delivered through memes and algorithmic recommendations, and it gradually reshapes their sense of self and of the world around them.
The true impact of harmful social media content
An international survey of thousands of children (and their parents) revealed that 77%* of children say social media negatively impacts their physical or emotional health, yet most feel powerless to change it. Children report tiredness (28%), sore eyes (25%), sleep issues (20%), headaches (18%), device addiction (17%), anxiety (14%) and sadness or depression (14%), among other symptoms. Parents are also affected, with 68% noting at least one physical or emotional symptom linked to social media use.
Among the harmful content children report most, fake news topped the list (24%), followed by hate (23%), violence (22%), body image pressure (22%) and over-sexualised content (20%). The harms parents fear most include abuse (38%), hate (33%) and adult content (32%).
The findings also showed that neurodivergent parents and children in the UK reported greater physical and emotional harm from social media. Strikingly, neurodivergent children reported levels of emotional impact 18% higher than their neurotypical peers.
What are the key steps to ensure children remain safe?
Beyond Blocking: We need to act faster, smarter
As a mother to a young girl, her safety and happiness are always at the forefront of my mind. Many parents feel that banning smartphones or blocking social media apps is the only way to protect children. I don’t blame them; I’d do anything to keep my daughter safe and preserve her innocence for as long as possible. Yet I firmly believe that fear-driven censorship is not the solution.
Age restrictions and regulation are essential (albeit slow and difficult to implement), and “big tech” must be held accountable for some of the harms their platforms enable. However, our focus as technologists, parents and experts should be on acting fast and smart: enabling education and conversation, and equipping families with the tools they need to foster healthier digital experiences. Empowering children to navigate the online world safely is far more effective and sustainable than simply restricting access. Ultimately, the majority of our children will end up using some form of social media, gaming or chat platform.
Harnessing AI to Support Smarter Online Habits
As part of the team behind a new AI-driven app launched this month, I’ve seen firsthand how technology can empower families beyond restricting access. Building on the principles of acting fast and smart, the app combines peer-reviewed research, expert insight (journalists, academics and online harms specialists) and proprietary research studies to understand what children are really seeing online, while giving parents the tools to foster healthier digital habits.
Unlike traditional tools that rely on blocking or banning, this app works collaboratively with families to educate, support and empower children as they navigate the digital world. At its core is the proprietary Trust AI Engine, built on state-of-the-art multimodal large language models (LLMs) that analyse text, video and other media formats. It can identify 36 types of harmful content, from oversexualised posts and unrealistic beauty standards to glamorised substance use, disguised violence and hate speech. When content is flagged, the system guides users on retraining algorithms and making informed choices, addressing what we call “longitudinal overexposure” – the slow accumulation of harmful material over time.
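To make the flag-then-guide flow concrete, here is a minimal sketch of a classify-and-guide step. It is not the app’s actual implementation: the category names, keyword lists and `classify_post` function are invented for illustration, and a simple keyword match stands in for the multimodal LLM so the example is runnable.

```python
from dataclasses import dataclass

# Illustrative subset only: the engine is said to detect 36 harm types;
# these four categories and their keyword lists are invented for this sketch.
HARM_MARKERS = {
    "oversexualised": ["nsfw", "explicit"],
    "unrealistic_beauty": ["thinspo", "perfect body"],
    "glamorised_substance_use": ["get wasted", "blackout night"],
    "hate_speech": ["hate", "disgusting people"],
}

@dataclass
class Flag:
    category: str
    guidance: str

def classify_post(text: str) -> list[Flag]:
    # In the real engine this would be a multimodal LLM call;
    # a keyword match stands in so the sketch is self-contained.
    lowered = text.lower()
    flags = []
    for category, markers in HARM_MARKERS.items():
        if any(marker in lowered for marker in markers):
            flags.append(Flag(
                category=category,
                guidance=f"Flagged as {category}: hide or report similar "
                         "posts to retrain your feed.",
            ))
    return flags
```

Attaching actionable guidance to each flag, rather than silently blocking the post, is what turns detection into the educational, collaborative experience described above.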
The engine dynamically personalises the experience according to age, gender and country, with neurodiversity support in development. It provides accessible guidance on the emotional and physical effects of content, alongside practical resources to mitigate harm. Users gain real agency, influencing the algorithms that shape their feeds, reducing exposure to harmful material.
The Trust AI Engine evolves continuously. It learns from the shifting language of digital culture, including slang, emoji and hashtags. For example, if a hashtag such as #EdSheeran has been used as coded language for eating disorders and self-harm, the engine can recognise the context and offer proactive guidance. Simultaneously, it develops a nuanced understanding of content each user genuinely values, surfacing more affirming, enjoyable and trustworthy material over time.
Working Together for Safer Digital Futures
Social media has changed dramatically over the last decade. If the current trajectory of innovation continues, and if major technology companies remain largely unchecked, society faces a critical imperative. We must establish a comprehensive ecosystem that brings together researchers and academics, regulatory bodies, educators and parental organisations, underpinned by fresh, creative and technological thinking.
The introduction of the Online Safety Act in the UK marks a meaningful step forward, yet no single measure – be it restrictions, bans, or parental controls – can solve the problem alone. More must be done to empower digitally savvy young people to make safer choices. No individual or institution can or should shoulder the challenge of online safety alone. Artificial intelligence and deep analytics can guide parents, experts and regulators on that journey.
By harnessing emerging technologies responsibly, we can ensure they are deployed in ways that preserve individual agency, maintain oversight without resorting to the blatant invasions of privacy seen in some well-known parental control software, and, above all, safeguard younger generations. My child is growing up fast, so please – let’s all get our act together and start collaborating.



