
With the rapid increase in the use of AI, there has been intense focus on the EU AI Act and what it will mean for organisations providing or deploying AI. However, even where an AI system falls outside the scope of the AI Act, or is only partially within scope, it can still raise significant data protection issues under the UK and EU GDPR (referred to collectively in this article as the GDPR) if any inputs to, or outputs from, the AI system include personal data.
In practice, most meaningful AI deployments are likely to interact with some level of personal data at some point in their lifecycle, for example during training, testing, monitoring or live use. AI technology also has features such as large‑scale data ingestion, complex model behaviour and automated decision‑making that can increase data protection risks.
Organisations that wish to deploy AI responsibly, and to avoid delays or the derailment of projects, will typically need to update their data protection compliance frameworks, regardless of whether the AI Act applies to a particular system.
Based on our work with clients, we see six key areas for action.
1. General Compliance: Refreshing the Foundations
Many organisations already have mature data protection programmes. However, the deployment of AI can expose gaps in those frameworks.
One important step is to update records of processing activities (ROPAs) to reflect new AI‑related processing of personal data (Article 30 GDPR). This typically requires data mapping of the relevant processing, including details such as the categories of personal data processed and the associated data flows. Importantly, AI systems frequently also generate new personal data that needs to be accounted for in ROPAs.
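As a loose illustration of the data-mapping exercise described above, an AI-related ROPA entry can be captured as a simple structured record. The field names below are hypothetical and loosely mirror Article 30(1) GDPR; they are not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class RopaEntry:
    # Hypothetical fields loosely based on Article 30(1) GDPR; not a mandated format.
    activity: str                      # name of the processing activity
    purpose: str                       # purpose of the processing
    data_categories: list[str]         # categories of personal data processed
    data_subjects: list[str]           # categories of data subjects
    recipients: list[str]              # recipients, including AI vendors
    retention: str                     # envisaged retention period
    generates_new_personal_data: bool  # flags AI-generated inferences to be mapped

entry = RopaEntry(
    activity="Customer support chatbot (live use)",
    purpose="Responding to customer queries",
    data_categories=["contact details", "query content"],
    data_subjects=["customers"],
    recipients=["model hosting vendor"],
    retention="12 months",
    generates_new_personal_data=True,  # AI outputs may themselves be personal data
)
print(entry.generates_new_personal_data)
```

The `generates_new_personal_data` flag is the AI-specific addition here: it prompts the record owner to account for inferences the system creates, not just the data it ingests.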
General data protection policies should also be tailored to reflect AI‑specific risks. This may include:
- explaining how AI affects the exercise of data subject rights such as access, rectification and objection (Articles 12 to 22 GDPR). On a basic level, this may mean that AI systems need to be searched alongside other systems (such as email systems) in response to a data subject request;
- addressing fairness and transparency, including where the AI carries out automated decision‑making (Articles 5(1)(a), 13, 14 and 22 GDPR); and
- clarifying how AI‑generated inferences that constitute personal data are treated, including applicable retention and access requirements.
Traditional policies often assume relatively simple, linear data flows. AI systems, by contrast, may involve multiple interconnected systems to generate an intended output, such as CRM systems, HR systems, email archives, document management systems, public web data, and telemetry or usage data from deployed products. The picture can become all the more complicated where AI agents, which can act more autonomously, are involved. Reflecting these flows in policies and internal guidance can help support compliance with the core GDPR principles, such as transparency, accountability and fairness (Article 5 GDPR).
2. Project Launch: Getting the Legal Framing Right from the Start
AI projects often move quickly, driven by competitive pressure and enthusiasm to embrace this powerful new technology. If data protection is bolted on late in the process, it can cause significant delays or even force a redesign.
At project launch, organisations should, as early as reasonably possible:
- map data protection roles and responsibilities (controller, joint controller or processor) across the AI lifecycle, including any third‑party vendors or group entities involved in training, testing or hosting models; and
- identify the lawful basis for each processing activity (Articles 6, 9 and 10 GDPR), both for training and for live use. For many generative AI projects that involve the extraction and use of personal data from public sources, legitimate interests (Article 6(1)(f) GDPR) are likely to be central, which makes robust, well‑documented legitimate interest assessments (LIAs) particularly important.
Where AI is used to make, or support, decisions that have legal or similarly significant effects on individuals, the specific rules on automated decision‑making and profiling (Article 22 GDPR) will need to be considered from the outset.
In parallel, organisations need to prepare data protection impact assessments (DPIAs) for “high‑risk” AI use cases under Article 35 of the GDPR. In practice, DPIAs on AI projects can help to flesh out the details of the processing activities involved, even if there is an argument that they are technically not required. DPIAs should identify and assess risks such as discrimination, lack of transparency or over‑reliance on automated outputs, and set out mitigation measures. Those measures should then be embedded into project governance so that they are implemented in practice rather than remaining purely on paper.
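One way to keep DPIA mitigations from remaining purely on paper is to track them in a simple risk register that project governance can query. The sketch below is purely illustrative; the risk descriptions, owners and status labels are hypothetical and not drawn from any regulatory template.

```python
# Hypothetical DPIA risk register: each identified risk is paired with a
# mitigation, an owner and a status so that open items can be surfaced
# at project governance checkpoints.
risks = [
    {"risk": "Discrimination in model outputs",
     "mitigation": "Bias testing before release; human review of edge cases",
     "owner": "AI product lead",
     "status": "open"},
    {"risk": "Over-reliance on automated outputs",
     "mitigation": "Mandatory human sign-off for significant decisions",
     "owner": "Operations",
     "status": "mitigated"},
]

# Surface unresolved risks for the next governance review.
open_risks = [r["risk"] for r in risks if r["status"] == "open"]
print(open_risks)
```

Even a lightweight register like this gives the DPIA a lifecycle: it can be re-queried whenever the system changes, rather than being filed away after launch.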
3. Contracts: Building Data Protection into AI Procurement
AI deployment is increasingly dependent on third‑party vendors, for example providers of foundation models, APIs, cloud infrastructure or specialist tools. Data protection considerations need to be integrated into both pre‑contractual due diligence and contractual terms (such as where a vendor is appointed as a processor under Article 28 GDPR).
From a data protection perspective, organisations should, before engaging vendors, ask due diligence questions specific to AI procurement, including:
- how the vendor trained its models and what categories of data were used;
- how personal data is protected in the vendor’s environment, including applicable security measures; and
- whether and how customer data may be used for model improvement or other secondary purposes, and the applicable legal basis.
Standard terms offered by AI vendors can, in some cases, allocate significant data protection risk to the customer, for example by permitting broad reuse of customer data or limiting audit and transparency rights. Reviewing those terms from a data protection perspective, and assessing whether existing intra‑group data transfer agreements need to be updated to reflect the AI vendor relationship, can therefore be important. Where possible, organisations may wish to develop standardised contractual templates for AI procurement that ensure consistent treatment of data protection issues.
4. Transparency: Explaining AI without Giving Away the Crown Jewels
Transparency is a core principle of data protection law (Article 5(1)(a) GDPR), and privacy notices will usually need to be updated to cover AI‑related processing (Articles 13 and 14 GDPR). This typically includes:
- the personal data inputs to AI systems;
- the nature of the outputs, including any AI‑generated inferences that constitute personal data; and
- where relevant, meaningful information about the logic involved in automated decision‑making, as well as the significance and envisaged consequences for individuals.
Beyond privacy notices, organisations may need to provide information about how AI systems operate, and how data protection risks are managed, to supply‑chain partners, regulators and courts.
At the same time, organisations will often wish to protect trade secrets, privileged material and commercially sensitive information in order to preserve competitive advantage. This will usually require a case‑by‑case analysis in light of legal and regulatory expectations, as well as market practice.
5. Monitoring and Audits: Treating AI as a Lifecycle, Not a One‑Off Project
AI systems are rarely static. Models may be retrained, fine‑tuned or repurposed; data inputs may change; and regulatory expectations may evolve over time. Data protection compliance therefore needs to be monitored throughout the AI lifecycle, consistent with the accountability principle set out in Article 5(2) of the GDPR.
A practical approach is to ensure that data protection issues are incorporated into AI system audits. This may include assessing accuracy, fairness and robustness, and how these factors affect individuals. Risks identified in DPIAs should be tracked, and those assessments should be reviewed and updated as the system evolves, for example when new data sources are added, functionality changes, or the system is deployed in a new context.
Organisations may find it helpful to develop metrics and methodologies to evaluate accuracy, fairness and transparency on an ongoing basis, and to link these to governance processes. Where issues are identified, risk mitigation measures such as human review, model retraining or restrictions on use in specific scenarios will need to be implemented in order to address them.
6. Training and Culture: Making Compliance Everyone’s Business
Technical and legal controls will only go so far if the organisation’s culture does not support responsible AI use.
Staff should be made aware of AI‑specific updates to data protection policies, and data protection issues should be incorporated into AI literacy and training programmes. This is consistent with the obligation to implement appropriate technical and organisational measures, both in relation to policies and procedures and in relation to data security (Articles 24 and 32 GDPR respectively).
A focused training programme might, for example, provide:
- role‑specific sessions for data protection officers, legal teams, IT professionals and AI developers, addressing how AI changes the way personal data is processed within their respective areas; and
- regular refresher training to keep teams up to date with evolving legal and regulatory requirements, and to reinforce expectations around issues such as automated decision‑making, data minimisation and security.
A strong culture of accountability and awareness can make the difference between AI projects that are quietly shelved due to compliance concerns and those that deliver sustainable long‑term value.
Conclusion
AI deployment is a present‑day reality for many organisations. While the EU AI Act and other emerging AI‑specific regulations are important, they sit alongside, rather than above, existing data protection regimes.
By addressing the above six practical areas—namely general compliance, project launch, contracts, transparency, monitoring and audits, and training and culture—organisations can build a framework for data protection compliance in AI deployment that is aligned with the GDPR. This can reduce legal and regulatory risk and help build the trust that is essential if AI is to deliver on its promise.

