AI in English language education: why we need an ethical framework

By Dr Evelina Galaczi, Director of Research, English at Cambridge University Press & Assessment

Like many industries, the education sector sees huge potential in the rapid growth and adoption of AI. In fact, a recent report by Microsoft found that 86% of education organisations have used GenAI. English language education also stands to benefit enormously from this groundbreaking technology and is changing at a pace we have never seen before. From English teachers using AI to enhance the classroom experience to test developers integrating the technology into their exams, AI can bring enormous benefits if used ethically and with a human in the driving seat.

Don’t forget the human aspects of intelligence

The latest AI tools, such as ChatGPT, Copilot or Gemini, produce responses to prompts with great confidence, and it would be easy to forget that they are, in fact, just machines with access to vast amounts of data. Information and data, however, are not knowledge or intelligence. We mustn’t forget that human intelligence is much more than a powerful computer producing responses to prompts.

Professor Rose Luckin, a highly respected AI expert, put this well in one of her recent books: “AI systems speak with authority from a position of ignorance and in the same way that we must be cautious when we meet humans with an excess of such hubris, we must be even more careful when our AI behaves in such ways, because our AI can be scaled in a way that arrogant people cannot”.

Don’t dismiss the human aspects of communication

You don’t have to look far for an example of how the technology, impressive as it is, has limitations and cannot entirely replace humans. I’ve been involved in English language learning and assessment for more than 30 years, working at the forefront of changing technology and seeing many different approaches to learning and assessing English.

I recently tried AI-powered translation technology for a chat with a friend – I was speaking Bulgarian, and they were speaking English. Whilst it’s an impressive piece of technology, it’s important to remember that communication is not just a simple transaction of words. It also has many additional aspects, such as gestures, nods and body language, which are at the heart of the social and emotional connections that a real-world conversation creates. It’s these tiny aspects of conversation – which are actually not tiny in terms of their effect – that show understanding (or even misunderstanding!) and often keep the conversation flowing.

How can we deliver ethical AI in English language education?

So, despite the buzz and the obvious benefits AI brings to English language learners, teachers, test takers and institutions around the world, there are still many unanswered questions and challenges. Perhaps the biggest question of all is: how can we deliver AI ethically? Or in other words, how can we develop AI solutions for English language education that have integrity and that people trust?

Six key principles for ethical AI 

The answer lies in working in line with an ethical framework – because without this, AI runs the risk of losing credibility in English language education or, even worse, damaging it. My colleague Dr Carla Pastorino-Campos put this nicely when she recently said:

“The English language learning and assessment sector shouldn’t be afraid of AI – but it’s essential to understand more about the risks associated with the technology and understand how we can deliver it ethically.”

We’ve defined six key principles for the ethical use of AI in English language education. These centre around a human approach to AI, because it’s critical to acknowledge the important role of the human in both language attainment and quality assessment. Here’s a closer look at the principles in practice:  

  1. AI must consistently meet the standards of human test developers and examiners
    AI systems must accurately assess the right language skills and deliver results that people can trust. The technology should enhance the integrity of what’s being measured and not be used to cut corners. This is essential when AI is used for high-stakes English tests for admissions or immigration purposes. We urge test providers to collect robust evidence showing that AI-developed test content and AI-generated scores meet the same standards as those produced by highly skilled, experienced developers and examiners.
  2. Fairness isn’t optional – it’s foundational
    AI-based language learning and assessment systems must be trained on inclusive data to ensure they are fair and free from bias. Along with using diverse data sets it’s essential to continuously monitor for bias and involve a wide range of stakeholders throughout the test design process. 
  3. Data privacy and consent are non-negotiable
    By ethically collecting and leveraging data, we can improve the learning and assessment tools we offer. All parties must be clearly informed about what data is collected, how it’s stored, and what it’s used for – and they must actively give consent. Behind the scenes, this means implementing robust encryption, secure storage protocols, and safeguards against hacking. It also means that Intellectual Property is respected and regulated. This robust approach helps us to develop quality AI language learning and assessment tools that users can trust.
  4. Transparency and explainability are key
    Learners need to know when and how AI is used to determine their results. AI systems must be developed and deployed transparently, with robust oversight and governance. Providers must be able to clearly articulate the role AI plays, as well as the frameworks in place to ensure test integrity and accuracy. In high-stakes exams, that means any marks determined by AI must be justifiable to relevant stakeholders when needed. It also means that a hybrid approach, where AI and human experts work together, may be the best option.
  5. Language learning must remain a human endeavour
    While AI can enhance learning, it cannot replace the uniquely human experience of acquiring and using language. Ethical AI in education must support and empower learners and teachers, not overshadow the human touch that makes language meaningful. AI-based assessment must always keep a human in the loop. This helps to establish accountability on the part of test providers, and allows a human to step in where oversight, clarity, or a correction is needed for quality control.
  6. Sustainability is an ethical issue
    AI isn’t just a digital tool – it’s a physical one, with real-world environmental costs. AI systems process vast amounts of data and have massive energy needs, which places a big responsibility on everyone, including language education providers. This must be kept in mind when choosing which types of AI tools to develop or use. It’s important to ask: is this AI system necessary, or are there more ecologically friendly and sustainable options available?

So how do we put these principles into practice?

The reality is that AI in education – and in many fields – lacks consistent regulation. This means it becomes the responsibility of us, not only as leaders and innovators but also as human beings, to ensure that we keep the best interests of our intended audience top of mind when innovating with AI.

It’s essential that we collectively work to robust standards when setting an ethical framework. This will ensure the delivery of solutions that are safe and trustworthy. In English language learning and assessment, this means most importantly ensuring a human-centred approach to AI – and one which adds proven value.

We must keep up with the pace of change: as AI continues to evolve at an unprecedented rate, so must we. As we navigate this, one thing is clear: AI is here to stay, but it cannot replace human expertise – it’s when the two work together that we get the best results.