
Navigating responsible AI and data protection

By Lauren Wills-Dixon, solicitor and head of privacy, Gordons

The rapid advancement of AI is transforming industries and unlocking new efficiencies. Organisations across multiple sectors are using AI to improve forecasting, make more informed decisions, automate tasks and much more. From healthcare to finance, manufacturing to retail, the opportunities are seemingly limitless.

However, the accelerated adoption of any new technology inevitably brings fresh challenges, and one of the biggest for AI is data privacy. Does training AI models require trading away privacy rights?

At the core of AI innovation, particularly generative AI, lies the need for vast amounts of data, and the ingestion process often cannot strip out ‘personal data’. AI models thrive on high-quality, diverse datasets to learn patterns, make predictions and drive automation. However, the more data these models use, the greater the risk of privacy breaches, intellectual property issues, data misuse and ethical concerns.

The question is how organisations can harness AI’s potential without compromising data privacy and protection.

Data protection risks in AI

Data protection compliance and the use of AI are not mutually exclusive, but the risks are real. Like any software, AI systems are vulnerable to hacking and data breaches, especially when dealing with large datasets. This is an ongoing concern: the latest Government figures show that half of UK businesses experienced some form of cyber security breach or attack in the last 12 months, rising to 70% of medium and 74% of large businesses.

Another risk is a lack of transparency, particularly as AI can be highly invasive, scraping data without the consent or knowledge of the individual to whom it relates. For example, some AI systems can build a detailed profile predicting an individual’s present and likely future behaviour.

This can often happen without the individual even knowing, raising ethical concerns about transparency and consent. Given the personal data element, the issue is not just which laws apply to the use of AI but also the individual privacy rights people are afforded under the UK GDPR.

Additionally, organisations are increasingly deploying AI solutions developed by third parties. As an employer and ‘data controller’, it is crucial to set a company position on the use of GenAI. An AI policy is important to mitigate privacy risks: it dictates what is acceptable when inputting confidential information or personal data into GenAI tools, and when using GenAI output for internal and external business purposes where that use could be infringing. Companies must innovate to remain competitive, but they must also ensure compliance with stringent data protection regulations.

The legislative landscape

The first point to make about AI and data protection is that, while the law around AI is constantly evolving, the UK currently has no specific AI-focused legislation. Instead, the UK’s regulatory landscape for AI is centred on a pro-innovation approach: it relies on sector-specific guidance and generally adopts a tech-agnostic stance.

The UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018 (DPA 2018), the Equality Act 2010 and various consumer rights laws largely govern AI’s use, particularly when it involves automated decision-making and personal data processing. However, while the UK GDPR and DPA 2018 address the risks of large-scale automated processing of personal data, neither explicitly mentions AI, leaving regulators to interpret how the GDPR applies to new technologies. The Information Commissioner’s Office (ICO), for example, has issued guidance on AI and data protection that stresses the importance of respecting individual privacy rights.

For companies operating in the EU and/or processing personal data about people in the EU, the EU AI Act will apply to activities involving the development and/or deployment of AI solutions. The Act takes a risk-based approach, categorising AI systems from minimal and limited risk up to high risk (with certain practices prohibited outright), each tier carrying its own rules and compliance requirements.

Ethical standards

Responsible AI and data protection go beyond legal requirements. The challenge for adopters will be meeting evolving regulatory expectations, establishing ethical standards and protecting consumer rights (where applicable) without hindering innovation as AI use spreads across the organisation.

Even where organisations meet their regulatory requirements on data privacy and protection, AI built on poor data protection standards still carries risks.

One study by KPMG in 2024 found that 63% of consumers were concerned about the potential for generative AI to compromise an individual’s privacy by exposing personal data to breaches or through other forms of unauthorised access or misuse.

Organisations that fail to prioritise privacy risk reputational damage and erosion of consumer trust, as well as penalties under the UK GDPR.

Maintaining trust

Innovation is also driving progress here, with new ways of supporting more responsible AI use in relation to data privacy and protection:

  • Synthetic data (artificially generated datasets that mimic real-world data) supports machine learning without exposing personally identifiable information (see the sketch after this list).
  • New privacy-enhancing techniques and stronger encryption measures, particularly in financial services, allow organisations to analyse data anonymously.
  • In retail, zero-party data – where a customer intentionally shares information, often in exchange for a discount, such as with loyalty schemes – allows brands to personalise the shopping experience without relying on third-party tracking.
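
To make the synthetic data idea concrete, here is a minimal sketch in Python, assuming the numpy and pandas libraries are available; the dataset, column names and per-column sampling are purely illustrative, not a production method. It fits simple statistics to each column of a ‘real’ dataset and samples artificial records from them, so no genuine customer record is ever exposed.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=42)

    # Illustrative 'real' customer data -- in practice this would be
    # the sensitive dataset you do not want to expose.
    real = pd.DataFrame({
        "age": rng.integers(18, 80, size=1000),
        "annual_spend": rng.gamma(shape=2.0, scale=500.0, size=1000),
    })

    def synthesise(df: pd.DataFrame, n: int) -> pd.DataFrame:
        """Sample artificial records that mimic each column's mean and
        spread, without copying any real row."""
        columns = {}
        for col in df.columns:
            mu, sigma = df[col].mean(), df[col].std()
            columns[col] = rng.normal(mu, sigma, size=n)
        return pd.DataFrame(columns)

    fake = synthesise(real, n=1000)
    print(fake.describe())  # similar statistics, no real individuals

A naive per-column approach like this loses the correlations between columns, which purpose-built synthetic data generators preserve, but the privacy principle is the same: the model learns from statistics, not from identifiable individuals.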

Some developers are also embedding privacy into AI development, adopting a privacy-by-design approach which prioritises security and ethical considerations from the outset. It’s all about maintaining trust at every touchpoint.

Ultimately, organisations that prioritise responsible AI use will mitigate risks and gain a competitive edge. Consumers are becoming increasingly aware of their digital rights.

Consequently, if AI is already driving competitive advantage, then responsible innovation – maximising AI with embedded data protection – will only accelerate the benefits.
