
The rise of artificial intelligence (AI) is having a profound and contradictory impact on the UK’s youth. It offers unprecedented opportunities for learning and support, yet simultaneously introduces serious new risks to their mental health, safety, and the development of crucial life skills. As the government and regulators scramble to respond, a complex picture is emerging, defined by rapid adoption, significant dangers, and a pressing need for evidence-based policy.
A Government Inquiry into the Digital Landscape
Recognising the urgency, the Department for Science, Innovation and Technology (DSIT) has launched a major consultation to understand the impact of AI, smartphones, and social media on young people. The consultation, open for input until April 2026, highlights a critical gap: research into the long-term effects of these technologies is significantly underfunded. The government is now seeking views from experts, civil society, parents, and young people themselves on several key areas:
- AI-Specific Harms: How to tackle new threats like AI-generated child sexual abuse material and non-consensual “deepfake” intimate images.
- Social Media Access Bans: The feasibility and effectiveness of a potential ban for under-16s, learning from new laws in Australia.
- Age Verification: Improving age checks (e.g., via facial age estimation) and debating whether the “digital age of consent” should be raised from 13.
- Addictive Design: Restricting features like infinite scrolling and app “streaks,” and the potential for “digital curfews.”
- Phones in Schools: Supporting the new guidance for “phone-free” schools and exploring Ofsted’s role in enforcement.
- Screen Time Guidance: Assessing the usefulness of new, evidence-based guidance for parents.
What the Research Tells Us: A Generation of AI Natives
Recent studies reveal a generation quick to adopt AI, but with concerning side effects. Research from Oxford University Press on the use of AI in UK schools found that while 80% of 13- to 18-year-olds regularly use AI for schoolwork, 62% feel it has negatively affected their skills and development. One in four admitted that it lets them find answers without doing the work, and 12% said it limits their creative thinking. This points to a growing erosion of independent problem-solving and a threat to academic integrity.
This is echoed by research from the National Literacy Trust, which found that 61.2% of students use AI for homework and that one in four admit to copying its outputs directly. Teachers are on the front line: 86% agree that students must learn to use AI critically, yet 67% feel they lack the training to guide them effectively.
Perhaps most alarmingly, research by the Youth Endowment Fund revealed that 25% of 13- to 17-year-olds have turned to AI chatbots for mental health support, a figure that nearly doubles among teens affected by serious youth violence. Experts warn that these vulnerable young people need human connection, not algorithmic responses.
The Dark Side: Abuse, Exploitation, and Psychological Harm
The dangers extend beyond the classroom. A report from the NSPCC highlights how AI is being weaponised to generate non-consensual intimate images and to bully, harass, and groom children. The organisation frames this not merely as a content-moderation failure but as a fundamental failure of “safety by design” by tech companies, one that causes severe psychological harm and normalises abuse. They are calling for new legislation that mandates a “duty of care” for child safety in AI product design.
The Regulatory Response: A Patchwork of Protections
In response to these challenges, the UK is developing a multi-layered regulatory framework.
- Expanding Mental Health Support: The government is expanding NHS Mental Health Support Teams (MHSTs) in schools, aiming to cover 100% of pupils by 2030. The initiative provides early, accessible support for mild to moderate issues such as anxiety, helping to prevent crises and reduce pressure on the NHS.
- AI Safety Standards for Education: New AI safety standards have been launched specifically for the education sector. These mandatory technical and design standards require edtech developers and schools to prioritise psychological safety and responsible pedagogy, protecting children’s cognitive and emotional development from AI-specific harms.
- Ofcom and the Online Safety Act: The landmark Online Safety Act holds tech companies accountable for harmful content on their platforms. It mandates the removal of illegal content and protects children from material such as pornography and self-harm promotion. According to the UK Government, age checks introduced under the Act have cut visits to pornography sites by a third and increased the number of children encountering protective measures online. Ofcom has significant enforcement powers, including fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. It recently opened an investigation into X’s Grok AI “being used to create and share demeaning sexual deepfakes of real people, including children, which may amount to criminal offences,” a practice that causes deep distress for young people, particularly young girls.
- The ICO’s Age Appropriate Design Code: Operating under the Data Protection Act 2018, the Information Commissioner’s Office (ICO) enforces the Age Appropriate Design Code (the “Children’s Code”), a preventative framework. It requires online services to protect children’s data and safety by design, mandating high privacy defaults, minimal data collection, and robust age assurance. According to a report on its impact, the code has already driven nearly 100 positive design changes globally. The ICO can impose fines of up to £17.5 million or 4% of global annual turnover, whichever is higher (the sketch after this list illustrates how such caps are computed).
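Both regimes cap penalties at the greater of a fixed sum and a percentage of revenue. The sketch below is purely illustrative arithmetic, not an official calculator: the function names and the example revenue figure are hypothetical, while the statutory caps are those cited above.

```python
def osa_max_fine(worldwide_revenue_gbp: float) -> float:
    """Online Safety Act cap: the greater of £18m or 10% of qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)


def uk_gdpr_max_fine(annual_turnover_gbp: float) -> float:
    """UK GDPR cap (enforced by the ICO): the higher of £17.5m or 4% of global annual turnover."""
    return max(17_500_000, 0.04 * annual_turnover_gbp)


# Hypothetical example: a platform with £1bn in worldwide revenue.
revenue = 1_000_000_000
print(f"Ofcom (OSA) maximum fine: £{osa_max_fine(revenue):,.0f}")        # £100,000,000
print(f"ICO (UK GDPR) maximum fine: £{uk_gdpr_max_fine(revenue):,.0f}")  # £40,000,000
```

The “whichever is greater” design means the fixed floor catches smaller services, while the revenue share keeps penalties meaningful for the largest platforms.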
Charting a Way Forward: Key Recommendations
The UK stands at a pivotal moment. The central challenge is to balance AI’s immense opportunities with its significant risks. Success will depend on moving from reaction to thoughtful, evidence-based action. Key recommendations for policymakers, educators, and tech companies include:
- Prioritise Evidence Over Anecdote: Investment in high-quality, long-term research is not optional—it is essential for crafting effective policy that truly protects young people.
- Acknowledge the Complex Debate: For polarising ideas like a social media ban, policy must be nuanced, considering both parental concerns and expert warnings that bans could simply push harmful activity underground.
- Close the Learning Gap: Systematically integrate human-AI collaboration into schools and workplaces to teach students how to use AI responsibly, as a tool rather than a crutch.
- Prioritise Safety and Equity by Design: Enforce safety standards rigorously and actively fight algorithmic bias to ensure AI benefits all young people fairly.
- Centre on Human Connection: Use AI to augment, not replace, the teachers, mentors, and real-world relationships that are essential for healthy development.
- Consider Practical Implementation: For any new measure, from age checks to restrictions on addictive design, policymakers must clearly set out how it can be implemented responsibly, safely, ethically, effectively, and with respect for privacy.
- Implement Responsible AI Regulation: Ensure that any future UK AI regulation includes explicit protections for vulnerable people online, particularly young people.

