
AI in Education: Balancing Innovation with Regulation

By Stephan Geering, Deputy General Counsel and Trustworthy AI & Global Privacy Officer at Anthology

The AI revolution is here. The years 2025 and beyond will be remembered not only for AI’s rapid advancements but for how we choose to regulate and shape its role in society. Nowhere is this challenge more important than in education, where AI promises to revolutionise learning, making it more personalised and inclusive.

However, as we embrace these possibilities, we must also confront the ethical dilemmas, biases, and risks that come with AI. The future of learning depends on striking the right balance – leveraging AI’s capabilities while ensuring that ethical safeguards and regulations guide its use. The question isn’t just what AI can do for education but how we ensure it does so safely and responsibly.

How AI is Changing the Education Sector: For the Better

Technology has always sparked debate in education – just look at the controversy over calculators in the 1970s. Critics feared they would weaken analytical skills, yet today, they’re essential learning tools. Similarly, AI is reshaping education, and instead of resisting it, we should focus on harnessing its benefits while minimising the risks.

Universities are already embracing AI responsibly. The Russell Group in the UK advocates for AI literacy, and institutions such as the University of Leeds have integrated AI tools into their Virtual Learning Environment (VLE) to assist with course structure, assessment design and grading rubrics – keeping educators in control while driving engagement and improving efficiency.

Educational institutions are also increasingly adopting AI-driven interactive learning tools that enable students to engage in dynamic discussions with virtual personas. These AI-powered figures can range from historical figures to fictional characters. For example, a digital Isaac Newton persona can ask students about the principles of gravity, while an AI-powered Emmeline Pankhurst might question them about the British suffragette movement. These tools push students to explore topics in more depth, with the AI persona providing prompts that require learners to develop and refine their ideas. This teaches lateral thinking skills, which are highly valued by employers. Further, each task ends with a reflection question, encouraging students not only to assess their own performance but also to consider the role of AI in the discussion.

Reflection questions encourage students to evaluate the accuracy of information, consider the implications of AI-generated content, and recognise potential biases. In doing so, learners improve their AI literacy skills and develop a deeper understanding of ethical AI engagement.

AI isn’t about replacing teachers but supporting them. By automating administrative tasks, it frees up time for lesson planning, mentoring, and greater student engagement. At The University of Leeds, 95% of instructors found AI tools in their VLE saved them time, and 92% said they inspired new teaching approaches.

Navigating the EU AI Act

Ed-tech companies, educational institutions and other stakeholders are working towards compliance with the EU AI Act. The law applies to all organisations – public and private – offering AI-related products or services within the EU.

Designed to safeguard fundamental rights, including privacy, non-discrimination and freedom of expression, the Act introduces clear legal guidelines that shape how AI should be developed and deployed responsibly. Now more than ever, organisations must adapt to align with these new ethical and safety standards.

Ensuring Human Oversight

Maintaining human control over AI is essential. Educational institutions should appoint senior leadership to oversee AI governance and ethical use. Ed-tech companies can facilitate responsible adoption by providing detailed information about their AI-driven features and allowing institutions to opt in to them, ensuring educators retain control over their use.

Placing Transparency and Inclusivity at the Centre

An inclusive approach is key to ethical AI integration. Institutions should involve faculty, staff, and students in policy discussions, forming advisory groups to guide AI strategy.

Communicating AI policies effectively is equally important. For instance, regular updates and clear documentation help all stakeholders understand how AI tools function, as well as their opportunities and risks.

Building AI Literacy

For AI to be used effectively, institutions must prioritise training. Staff and students should receive ongoing education on AI’s capabilities, risks, and ethical implications. Role-specific training can ensure different teams – especially those in areas such as IT, legal, security, or data governance – understand the technical capabilities, risks and compliance responsibilities.

Strengthening Data Protection

Institutions should establish clear policies on AI tool usage, opting for enterprise-grade AI solutions that generally provide enhanced data protection and prevent institutional data from being used to train AI models.

Strategic Steps for EU AI Act Compliance

Complying with the EU AI Act may seem daunting, but organisations can take clear, strategic steps to align with its requirements. Here’s an overview of key actions to ensure compliance:

  1. Develop a Comprehensive AI Framework – Establish policies that align trustworthy AI governance with EU regulations.
  2. Inventory AI Systems – Identify and classify all AI tools used within the institution to determine compliance obligations.
  3. Assess Compliance Gaps – Conduct risk assessments, especially for ‘high-risk’ AI systems, and implement necessary safeguards.
  4. Strengthen Vendor Due Diligence – Review third-party AI providers to ensure their systems comply with EU standards.
  5. Monitor Regulatory Updates – Assign legal or compliance teams to stay updated on evolving AI regulations and guidelines.
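
For the inventory step, some institutions find it helpful to maintain a structured register of AI tools tagged by the EU AI Act’s risk tiers (prohibited, high, limited, minimal) – for instance, systems used to evaluate learning outcomes fall under the Act’s high-risk category. The sketch below is illustrative only; the tool names, vendors, and classifications are hypothetical examples, not real products or legal determinations.

```python
from dataclasses import dataclass

# EU AI Act risk tiers, from most to least regulated.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AITool:
    name: str
    vendor: str
    purpose: str
    risk_tier: str  # one of RISK_TIERS

def high_risk_tools(inventory):
    """Return the tools that trigger the Act's high-risk obligations,
    such as risk assessments and human-oversight safeguards."""
    return [t for t in inventory if t.risk_tier == "high"]

# Hypothetical inventory entries for illustration.
inventory = [
    AITool("EssayGrader", "VendorA", "automated assessment scoring", "high"),
    AITool("ChatTutor", "VendorB", "conversational study aid", "limited"),
]

print([t.name for t in high_risk_tools(inventory)])  # → ['EssayGrader']
```

Keeping this register current makes the later steps – gap assessments and vendor due diligence – straightforward to scope, because the high-risk subset is always explicit.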

Looking Ahead: Striking a Balance

We are in the era of AI, and it holds much promise. The next few years will be critical in ensuring that educational institutions and ed-tech providers strike the right balance between innovation and responsible use, ensuring that AI is implemented safely and transparently.
