
Data Privacy in the Age of AI: Exploring the implications of AI on personal data usage and the need for stringent data protection measures.

By Mojisola Abi Sowemimo, Senior Data Privacy Consultant, GuidePoint Security

Technology is changing rapidly, and privacy law urgently needs to keep pace. Evolving technologies need the right guardrails to ensure that data gathered during the development and use of Artificial Intelligence (AI) is protected, collected with consent, and used strictly for the purposes initially communicated.

Generative AI has enhanced innovation and productivity and is facilitating growth in all spheres of life. AI systems typically require large data sets, and as AI is used across different sectors, the data these systems rely on ranges from personal to sensitive information. Each type carries different risks in the hands of unauthorized persons, risks that can lead to legal consequences, bias, and data breaches. To provide a robust analysis of the topic, let us consider some questions that come to mind and discuss answers to them.

How can we ensure that privacy is considered thoroughly in the development and usage of generative AI systems?

Digital Trust – Attaining digital trust is a responsibility for organizations, especially those that utilize evolving technologies such as Artificial Intelligence. Users expect organizations to protect them, and that protection spans the technology's entire lifecycle – from design and development through deployment and management. The collection and processing of data should comply with the privacy requirements of the applicable jurisdictions.

Adopting privacy principles is one way to ensure that data is properly protected, handled, and stored. Some of the privacy principles to consider, and how they work in practice, include the following (a brief sketch of both principles in code follows the list):

● Data minimization – Restricting collection to only the data required reduces the amount of data that could be stolen, thereby reducing the impact of a data breach.

● Purpose limitation – This principle requires that when data is gathered, the purpose for gathering it is fully defined and clearly communicated to the individuals concerned.
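As a minimal sketch of how these two principles might be enforced at the point of collection – all field names and purposes below are hypothetical examples, not any particular product's schema – consider:

```python
# Minimal sketch: enforcing data minimization and purpose limitation
# at the point of collection. Field names and purposes are hypothetical.

# Each declared purpose maps to the only fields it may collect.
ALLOWED_FIELDS_BY_PURPOSE = {
    "account_creation": {"email", "display_name"},
    "order_fulfillment": {"email", "shipping_address"},
}

def collect(raw_record: dict, purpose: str) -> dict:
    """Keep only the fields required for the declared purpose."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose)
    if allowed is None:
        # Purpose limitation: refuse collection for undeclared purposes.
        raise ValueError(f"Undeclared purpose: {purpose!r}")
    # Data minimization: drop everything not on the allowlist.
    return {k: v for k, v in raw_record.items() if k in allowed}

record = collect(
    {"email": "a@example.com", "display_name": "Ada", "birthdate": "1990-01-01"},
    purpose="account_creation",
)
# record now holds only email and display_name; birthdate was never stored.
```

The design choice here is that an undeclared purpose fails loudly, while over-collection is prevented structurally rather than by policy documents alone.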

AI thrives on Data – AI requires data to function: data feeds AI models, their machine learning algorithms, and their analytics. Because the technology is so dependent on data, there are concerns about how the data used to develop it is gathered. Several questions come to mind, including: is the individual whose data has been used to train models aware that their data is being used for this purpose? Did the individual consent to this purpose? Has the individual been informed of how they can withdraw consent?

We will explore the possible solutions for these questions as we proceed.

Let us discuss digital trust further. As part of organizations' obligations to protect the data they gather, and to ensure that it is gathered with consent, there are privacy tools that, when employed, help organizations comply with certain privacy rules.

Regional Privacy Laws & Regulations

It is highly recommended that organizations have a robust privacy program in place, as this serves as a foundation for the evolution of their business processes and technologies. Organizations with an efficient and robust privacy program are able to enhance existing programs to meet the growing privacy requirements of their evolving business. They are also required to adhere to the privacy laws and regulations of the regions where they operate or have consumers.

An example of such a privacy law in Europe is the General Data Protection Regulation (GDPR), whose privacy principles, when adhered to, help organizations demonstrate transparency and build confidence among the users of their technology.

In the U.S.A., the California Consumer Privacy Act (CCPA) and other state privacy laws have created guidelines which, when complied with, also help organizations move further toward developing digital trust. Requirements shared by the GDPR and several U.S. state privacy laws include consumer rights (e.g., the Right to Know, the Right to be Forgotten, and the Right to Delete), consent management, data handling and security, Privacy by Design, and notice requirements.

The European Union developed the EU Artificial Intelligence Act (AI Act) to create a legal framework for developing artificial intelligence technologies. The Act categorizes AI technologies based on their potential risk to individuals and society at large.

The National Institute of Standards and Technology (NIST) developed the Artificial Intelligence Risk Management Framework (AI RMF), a framework that helps organizations manage risks associated with AI systems.

While the EU AI Act is a requirement for designing and using AI in the EU, the NIST AI RMF is a voluntary framework and not a requirement.

Some of the challenges with the privacy requirements in AI technology include:

Fulfilling Consumer Rights

AI technologies require data for training models, identifying patterns, and other non-traditional business purposes. Because data gathered for these purposes may be stored and reused across many different data sets, tracing a specific individual's data through them can be difficult. For example, when fulfilling a consumer's Right to Delete request, identifying the data sets within which the requester's data resides could be quite challenging.

When a request involves the Right to Delete, or when an individual withdraws consent, is there traceability to ensure that all data sets holding that individual's data are properly identified, that the data is expunged from the databases, and that it is removed from future iterations? These are just a few of the questions that arise when considering the source of the data and the rights of the individuals whose data is gathered for AI development. It is crucial that organizations have defined processes to receive and process consumer rights requests.

For example, if a consumer exercises their right to have their data deleted, the defined process should be robust enough to record how the gathered data has been used, provide traceability to the databases and data sets in which the data resides, expunge the data, and stop its usage in future iterations.

The added complexity in fulfilling this right for AI technologies is that the data may already have been used to train models, and expunging such data from data sets – and from future training iterations – can be challenging. A minimal sketch of the traceability idea follows.
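The sketch below assumes a hypothetical internal data map that records which data sets hold each subject's data; the store names and exclusion mechanism are illustrative, and it deliberately leaves aside the harder problem of data already baked into trained model weights:

```python
# Minimal sketch: tracing and expunging a data subject's records across
# data sets for a Right to Delete request. The data map and store names
# are hypothetical; a real system would also handle trained-model copies.

from datetime import datetime, timezone

# Hypothetical data map: subject ID -> the data sets holding their data.
DATA_MAP = {
    "subject-123": ["crm_db", "training_set_v2", "analytics_warehouse"],
}

# (subject, data set) pairs excluded from future training iterations.
EXCLUDED_FROM_TRAINING: set[tuple[str, str]] = set()

def handle_delete_request(subject_id: str) -> dict:
    """Locate, expunge, and log every data set holding the subject's data."""
    datasets = DATA_MAP.pop(subject_id, [])
    for ds in datasets:
        # In a real system this would call each store's delete API.
        EXCLUDED_FROM_TRAINING.add((subject_id, ds))
    # An audit record supports demonstrating compliance later.
    return {
        "subject": subject_id,
        "datasets_expunged": datasets,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }

print(handle_delete_request("subject-123"))
```

Keeping such a map current from the moment of collection is what makes later deletion tractable; retrofitting traceability after data has spread across stores is far harder.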

Transparency

It is recommended that organizations identify the privacy requirements for their jurisdictions and ensure that their operations demonstrate transparency. The various ways this can be demonstrated include:

● Providing information on how data is gathered

● Informing consumers how their data may be used and for how long

● Stating how long the data is kept, who has access to it, and with whom it is shared

● Providing information on how consumers can opt out of their data being used.

This information can be provided in a privacy notice that is made readily available to consumers.

Privacy principles should be embedded in all stages of AI practice, technology development, and usage to ensure that the organization complies with all applicable privacy laws. One way to achieve this is to apply privacy controls at every stage of the data lifecycle – collection, storage, processing, and deletion – and to embed privacy operations in the design, development, and use of AI technologies. A brief sketch of lifecycle-stage checks follows.
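As a minimal sketch of attaching privacy checks to lifecycle stages – the stage functions, retention period, and record fields are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch: privacy checks attached to each data-lifecycle stage.
# Stage names, retention period, and record fields are illustrative.

from datetime import date, timedelta

RETENTION = timedelta(days=365)  # assumed retention policy

class Record:
    def __init__(self, subject_id: str, data: dict, consented: bool):
        self.subject_id = subject_id
        self.data = data
        self.consented = consented
        self.collected_on = date.today()

def collect(record: Record) -> Record:
    # Collection: refuse data gathered without consent.
    if not record.consented:
        raise PermissionError("No consent recorded for this subject")
    return record

def store(record: Record, vault: dict) -> None:
    # Storage: index by subject so later deletion stays traceable.
    vault[record.subject_id] = record

def purge_expired(vault: dict) -> list[str]:
    # Deletion: expunge records past the retention period.
    expired = [sid for sid, r in vault.items()
               if date.today() - r.collected_on > RETENTION]
    for sid in expired:
        del vault[sid]
    return expired
```

The point of the sketch is structural: consent is checked before anything enters the system, and retention-driven deletion is a routine operation rather than an afterthought.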
