
As students and educators begin preparing for the upcoming exam season, concerns around AI misuse are resurfacing with renewed urgency. The latest HEPI survey found that AI use is now almost universal: 95% of students report using AI in at least one way, demonstrating how quickly the technology is becoming embedded in academic practice.
While much of this discussion has centred on generative AI tools like ChatGPT and their influence on written assignments, a more pressing threat is emerging in higher education’s online assessments: agentic AI.
These systems are no longer limited to helping students draft or refine work; they can independently complete and submit academic tasks on a student’s behalf. Because they operate outside the browser and other monitored tools, they leave behind minimal behavioural traces. This development fundamentally challenges the assumption that digital assessments can reliably measure an individual student’s ability.
As the next exam cycle approaches, these risks become even more pronounced. The same agents capable of producing coursework can also sit online exams end-to-end, further undermining trust in digital assessments. Educational institutions therefore have an opportunity to strengthen their approach to this new phase of AI-enabled misconduct and ensure safeguards are in place to protect both the learning and assessment process.
How AI student agents challenge digital assessments
AI agents differ fundamentally from traditional AI misuse. Instead of a student opening an AI tool during an exam, these agents operate autonomously in the background, impersonating students and completing work on their behalf throughout the assessment.
These agents can generate answers, display them through on-screen overlays or clipboard content, and mimic legitimate test-taking behaviours, all without revealing any visible AI interface. Because they run outside the browser, needing only an active internet connection, they can evade many existing proctoring and exam security measures.
This presents a new challenge for institutions, which now need to balance the benefits of digital assessment platforms, such as improved access, flexibility, and reduced grading workload, against the risks that AI agents pose to academic integrity and institutional reputation.
However, this does not have to become a trade-off.
Steps to secure digital exams from AI student agents
To protect exam integrity without abandoning digital assessments, educational institutions can take the following three steps:
1. Implement offline summative assessments
One of the strongest defences against AI student agents is to deliver high-stakes digital exams offline. AI agents require computational power, access to exam content and an active internet connection to function. When exams are delivered offline, these agents cannot receive questions or transmit answers, making offline delivery a secure option for protecting summative assessments.
Institutions can begin by auditing their current exam portfolio to identify which assessments rely on internet connectivity. High-stakes, credential-defining exams should be prioritised for offline delivery. For assessments that must remain online, institutions should strengthen their security with measures such as AI-aware proctoring and enhanced monitoring.
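The audit described above can be sketched as a simple triage: rank each assessment by stakes and connectivity so the most exposed exams surface first. The exam names and fields below are hypothetical, purely to illustrate the prioritisation logic.

```python
# Sketch of an exam-portfolio audit: order assessments so that
# high-stakes exams that still require internet connectivity
# (the most exposed to AI agents) come first.

def prioritise_for_offline(exams):
    """Return exams sorted with high-stakes, online exams first."""
    def risk(exam):
        # False sorts before True, so high-stakes + online sorts to the front.
        return (not exam["high_stakes"], not exam["requires_internet"])
    return sorted(exams, key=risk)

portfolio = [
    {"name": "Formative quiz",       "high_stakes": False, "requires_internet": True},
    {"name": "Final medical exam",   "high_stakes": True,  "requires_internet": True},
    {"name": "Mock paper (offline)", "high_stakes": True,  "requires_internet": False},
]

for exam in prioritise_for_offline(portfolio):
    print(exam["name"])
```

Running this lists the high-stakes online exam first, flagging it as the priority candidate for offline delivery.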
2. Enforce full device lockdown, not just browsers
A secure browser no longer guarantees a secure exam. AI capabilities are increasingly embedded at the operating system level, meaning students may have access to AI tools even when their browser is locked down.
To mitigate this risk, institutions can benefit from restricting activity across the entire device, not just within a browser window. Full device lockdown prevents internet access, blocks external communication, restricts unauthorised applications and disables system-level features such as screenshots.
Institutions should also assess whether their exam platforms can control the full device or only the browser. If background applications, OS-embedded AI tools, or system-level features remain accessible, the exam environment is vulnerable to AI-assisted misconduct.
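One building block of full device lockdown is a process allowlist: anything running outside an approved set is flagged. The sketch below illustrates only that comparison; the process names are invented, and a real lockdown client would enumerate processes through OS APIs and additionally block network access, the clipboard, and screenshots.

```python
# Minimal sketch of one lockdown check: compare a snapshot of running
# process names against an allowlist. All names here are illustrative.

ALLOWED = {"exam_client", "systemd", "loginwindow"}

def unauthorised_processes(running):
    """Return process names that should not be active during the exam."""
    return sorted(set(running) - ALLOWED)

snapshot = ["exam_client", "systemd", "copilot_agent", "screen_recorder"]
print(unauthorised_processes(snapshot))
```

The check flags the two unexpected tools in the snapshot; a lockdown client would then block them or halt the exam session.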
3. Protect exam content with advanced encryption practices
Even in an offline environment, exam data may still be at risk if it is transmitted or stored in an unencrypted format. Advanced AI tools may attempt to inspect files or intercept data where possible. Strong encryption ensures that exam questions and student responses remain unreadable outside the secure exam application and can only be accessed using appropriate credentials.
Institutions should consider adopting digital exam platforms that provide end-to-end encryption for all exam content, from the distribution of exam questions to the collection of completed responses. This helps prevent exam content from being intercepted before the assessment and protects student responses from tampering after submission.
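The tamper-detection half of this protection can be sketched with Python's standard library: sealing a submitted response with HMAC-SHA256 so any modification after submission is detectable. This is an illustration only; a real platform would pair it with authenticated encryption (e.g. AES-GCM) for confidentiality, and the key handling here is deliberately simplified.

```python
# Sketch: seal a submitted exam response with HMAC-SHA256 so that
# any post-submission tampering fails verification.
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # in practice, provisioned per exam session

def seal(response: bytes) -> bytes:
    """Compute an integrity tag over the submitted response."""
    return hmac.new(KEY, response, hashlib.sha256).digest()

def verify(response: bytes, tag: bytes) -> bool:
    """Check the tag in constant time; False means the data was altered."""
    return hmac.compare_digest(seal(response), tag)

answer = b"Question 1: ..."
tag = seal(answer)
```

An untouched response verifies against its tag, while even a one-byte change causes verification to fail, which is exactly the post-submission tamper protection described above.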
Future-proofing digital exams
As agentic AI becomes more deeply integrated into education, institutions have an opportunity to safeguard the value of digital assessment while guarding against new risks. AI student agents have made it clear that traditional browser-based safeguards are no longer sufficient, and that maintaining exam integrity now requires a shift in how assessments are delivered and protected.
By moving summative assessments offline, securing full device environments and encrypting exam content end-to-end, institutions can reduce the risks associated with AI student agents while retaining the efficiency of digital exams. Taking these steps ensures that assessments remain reflective of genuine student learning.
