
AI and Privacy: Balancing Innovation with Data Protection

By Ashley Webber, Associate, Hunton

Artificial intelligence (“AI”) is one of the greatest technological advances of recent years, and we have only just begun to see its potential capabilities. Its immensely innovative nature will bring benefits globally to all industries, including health, finance, consumer products, cybersecurity, and more. However, as a technology predicated on data, the development and deployment of AI systems can come at a price, particularly with respect to personal data. Since the EU General Data Protection Regulation came into effect in May 2018, we have seen a huge increase in the number of data protection and privacy laws taking effect globally. Many of these laws contain common principles that must be complied with when processing personal data, but because AI systems require the processing of personal data (at times, in very large quantities), compliance with those principles can be challenging. This article provides examples of areas of tension between common data protection and privacy principles and data processing activities in the context of AI systems, and commentary on balancing innovation with data protection compliance.

Lawfulness

Personal data should only be processed when the processing satisfies a lawful basis provided for under the relevant law, for example, consent, necessity for the performance of a contract, or legitimate interests. While these bases may differ under applicable law, certain considerations are mutually relevant. For example, as confirmed by the European Data Protection Board in its Opinion 28/2024, when assessing the relevant lawful basis, it is important that the business distinguishes the different stages of the processing in the context of the AI system; it is very unlikely that one lawful basis will be sufficient for all stages, such as, for example, training the AI system and delivering the output. A difficulty can arise when developing an AI system which does not have a clear known purpose, or which has the potential for many purposes. An AI system of this nature has vast potential, and the restriction of that potential by the lawfulness principle can be considered unfavourable and potentially stifling to innovation. In this respect, it is important to revisit and reassess the processing activities regularly to ensure there is still a valid lawful basis, being mindful that the basis may change over time.

Transparency

When processing any personal data, a business is required to provide certain details regarding that processing; this information is most commonly provided through a privacy notice. Generally, the notice must be concise, easily accessible, easy to understand, and written in clear and plain language. For a business processing personal data in an AI system, particularly a complex one, providing a notice which satisfies these criteria will be a challenge. It requires a sufficient understanding of the AI system and the processing activities, and the ability to relay this understanding in a way the relevant individuals will understand, without downplaying or undermining the impact on their data. To prepare such a notice, engaging individuals in the drafting process, such as through prior consultation with test groups, can be of great benefit, allowing a business to test the balance between providing information that is detailed but possibly too complex to understand, and providing information that is easier to understand but does not accurately explain the processing activities.

Purpose Limitation

Personal data should be collected for specified, explicit and legitimate purposes, and not further processed in a manner incompatible with those original purposes. This can cause an issue with regard to AI systems, since the significant volume and variety of data required to train an AI system can mean using data from several sources that was originally collected for distinct purposes unrelated to the AI system. A business may cite legitimate reasons for wishing to use such data, for example to reduce the likelihood of bias in its AI system, but caution is advised, and the business should consider whether this use is appropriate in light of the original purpose for collecting the data. It may be that the business is required to take additional steps before using the data, such as informing the relevant individuals and seeking their consent. While this may require additional resources, in many instances the relevant data can likely still be used, and businesses should therefore not view this as an impassable hurdle to innovation.

Data Minimisation and Accuracy

Data protection and privacy laws usually require that any personal data being processed be adequate, relevant and limited to what is necessary for the relevant purpose, often referred to as the principle of “data minimisation.” AI systems generally require large volumes of data to perform effectively, particularly at the training stage. While this requirement may appear to be in tension with the principle of data minimisation, the principle does not inhibit using large volumes of data; it inhibits the use of any data which is unnecessary. Therefore, if a business can justify that its use of large volumes of data is necessary, this would be compliant with the data minimisation principle. What is deemed necessary will differ based on the AI system, but a business must be able to justify its position. Understanding the nature of the proposed data sets and the goals for processing the data will help in this respect.
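In engineering terms, one way a development team might operationalise this principle is to filter records against a documented allow-list of fields before they enter the training pipeline, pairing each retained field with its recorded justification. The following is a minimal, hypothetical sketch; the field names and justifications are illustrative assumptions, not requirements drawn from any particular law.

```python
# Hypothetical data-minimisation filter: only fields with a documented
# justification for the stated purpose are retained; everything else is
# dropped before the record reaches the training pipeline.
NECESSARY_FIELDS = {
    "age_band": "needed to detect age-related bias in model outputs",
    "region": "needed to balance training data across regions",
    "purchase_history": "core signal for the recommendation purpose",
}

def minimise(record: dict) -> dict:
    """Drop any field not on the documented allow-list."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "age_band": "25-34",
    "region": "UK",
    "purchase_history": ["book", "lamp"],
    "full_name": "A. Example",   # not necessary for the purpose: excluded
    "email": "a@example.com",    # not necessary for the purpose: excluded
}

minimised = minimise(raw)
print(minimised)
# {'age_band': '25-34', 'region': 'UK', 'purchase_history': ['book', 'lamp']}
```

Keeping the justification text alongside each field name also gives the business a ready-made record with which to evidence its position if challenged.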

In a similar vein, the personal data being processed must also be accurate. While confirming accuracy of data could be seen as a burden on the business using the data, in fact, using accurate data increases the likelihood of accurate results from the AI system.  Compliance with data protection and privacy laws in this sense may be considered beneficial to AI innovation.

Individuals’ Rights

Individuals subject to data protection and privacy laws are usually granted rights over their personal data. While the rights can differ across jurisdictions, they commonly include the right to access, the right to delete, and the right to correct personal data. Given the nature of AI systems, full compliance with such rights can be problematic for businesses. For example, if an individual requests that their data be deleted but their data is used in the training set, deleting the data may be difficult and may affect the output. However, it is not appropriate for a business to refuse to comply with a right simply because compliance may be difficult. To ensure compliance when using AI systems, businesses should design the AI system with individuals’ rights in mind, implementing measures which make compliance possible, e.g., a mechanism for retrieving data following an access request, or a process by which personal data is anonymised. Businesses should also be aware of any exemptions from compliance with rights requests that they may be afforded by applicable law.
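One design measure of the kind described above is to maintain an index from each data subject to the records derived from them, so that access and deletion requests can be actioned against a training corpus. The sketch below is a hypothetical illustration of that idea, not a prescribed compliance mechanism; the class and method names are the author's assumptions.

```python
# Hypothetical subject index kept alongside a training corpus, enabling
# access requests (retrieve all of a subject's records) and deletion
# requests (remove them and the index entry).
from collections import defaultdict

class SubjectIndex:
    def __init__(self):
        self._records = {}                   # record_id -> record
        self._by_subject = defaultdict(set)  # subject_id -> {record_id, ...}

    def add(self, record_id: str, subject_id: str, record: dict) -> None:
        self._records[record_id] = record
        self._by_subject[subject_id].add(record_id)

    def access(self, subject_id: str) -> list:
        """Retrieve all records for a subject (access request)."""
        return [self._records[r] for r in sorted(self._by_subject[subject_id])]

    def erase(self, subject_id: str) -> None:
        """Delete a subject's records and the index entry (deletion request)."""
        for record_id in self._by_subject.pop(subject_id, set()):
            del self._records[record_id]

idx = SubjectIndex()
idx.add("r1", "subject-42", {"text": "example training record"})
idx.add("r2", "subject-42", {"text": "another record"})
print(len(idx.access("subject-42")))  # 2
idx.erase("subject-42")
print(idx.access("subject-42"))       # []
```

Building such an index at collection time is far cheaper than attempting to locate a subject's data after the fact, which is the practical point of designing for rights compliance from the outset.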

As demonstrated, while tensions exist, data protection and privacy laws are not intended to block or impede innovation. Compliance with their requirements should be approached with flexibility, in accordance with regulatory and other guidance, and can in fact facilitate innovation.
