AITopics

As the White House ploughs forward with pro-AI policy, new concerns arise 

Managing how children access online information is an ongoing challenge. While parental controls offer some degree of protection over what children can access on a particular device, they aren’t watertight.

Meanwhile, minimum age requirements for certain websites or apps have proven easy enough to circumvent. 

The UK’s Online Safety Act, which came into force this year, is one of the most ambitious attempts to shield children from harmful material. However, critics suggest the legislation will infringe on free speech and create an online space that is ultimately controlled by the government. 

It has also sparked transatlantic conflict, with US tech companies voicing opposition, and the Trump administration weighing countermeasures.   

However, the question of how technology impacts children can’t be brushed under the carpet. Meanwhile, the rapid evolution of AI is complicating the situation further. 

Just this week, Reuters released a shocking report suggesting Meta allowed its AI chatbot to “engage a child in conversations that are romantic or sensual,” raising serious concerns.

AI safety and privacy for children 

The swirling controversy around Meta’s AI chatbot highlights unforeseen safeguarding risks associated with the use of the technology. 

Senator Josh Hawley (R-Mo.) believes “This is grounds for an immediate congressional investigation.” 

The White House moved ahead with an executive order to advance AI in education in April 2025, to “foster early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.”

Yet the push to bring AI into the classroom creates new privacy concerns in addition to safeguarding risks. 

According to Axios, “Chatbots can expose troves of personal data in ways few parents, students or teachers fully understand. Many teachers are experimenting with free chatbot tools. Some are from well-known players like OpenAI, Google, Perplexity and Anthropic. Others are from lesser-known startups with questionable privacy policies.”

This means schools could unknowingly put student privacy at risk in efforts to keep pace with innovation. 

In addition, it’s not clear how student interactions with AI feed into the training of algorithms. 

Advancing AI in education appropriately  

AI’s impact on educational standards and quality is another hot topic of discussion. 

When it comes to AI in education, Charlie Sander, CEO of ManagedMethods, is keen to point out that AI skills are already critical. For what he terms the first “AI-native” generation, removing AI from education entirely puts children at a disadvantage and fails to recognize that the skills we needed a decade ago are at risk of becoming obsolete.


“Terms like ‘brain rot’ paint a picture of the fear many have about the risk of spending too much time online. Indeed, there is a growing narrative exploring the question of whether or not AI will erode our critical thinking skills.”

If AI is causing a trend of “cognitive offloading,” it is key that we teach students how to train critical thinking skills in other ways.

Looking ahead, it’s likely that the rise of AI will fuel a change in assessment frameworks. It will be important for education organizations to adopt new standards to address this growing challenge, he explained. 

Don’t overlook AI’s positive impact 

Although unregulated AI can create safeguarding risks for children, these should be weighed against its positive impact. AI can also enhance learning and education, as demonstrated by bespoke tools. 

According to Mila Smart Semeshkin, CEO of edtech platform Lectera, “the use of artificial intelligence is also a powerful tool that can greatly enhance the adaptability of any course.”

“For example, AI-driven platforms can help generate content that is specifically tailored to the needs of each student, allowing them to keep up with the pace of the class while still meeting their individual learning requirements,” added the executive.

Munich-based EDURINO fuses educational games, ergonomic input tools and animated characters to teach both classic and future skills such as reading, logical thinking and coding to all age groups, while French startup PyxiScience is showing how AI can be used to enhance the study of mathematics.

Buddy.ai, the world’s first conversational AI tutor for kids, is democratizing access to English language tutors and recently demonstrated its commitment by being recognized as officially compliant with the Children’s Online Privacy Protection Act (COPPA). Said its CEO Ivan Crewkov at the time, “attaining kidSAFE and COPPA compliance was important to us as parents — we wanted to build AI technology that prioritized safety and privacy by design.” Other firms, such as Nisum, have been partnering up to turn the promise of AI into tangible results across a whole range of areas.

Ultimately, according to Sander, AI is a reality today. It is important for students to be trained to use it like any other tool. 

Once the initial knee-jerk fear around AI subsides, educators will need to adapt, using the tool with a keen and critical mind to prepare students adequately for the future of work. While we used to speak about the digital-native generation, today’s students will become the AI-native generation. 

“We need AI in the classroom to help students become innovators and equip them with a new kind of critical thinking. This demands us to be real about the benefits and risks of technology in the classroom,” concluded the executive. 
