The GDPR has now been in force for five years, and the European Commission wants to improve the regulation in the first half of the year. In concrete terms, binding deadlines for forwarding complaints and a general processing deadline for complete complaint procedures are to be introduced. The EU will regulate AI separately in the "AI Act", and the GDPR and AI are closely intertwined, as the ban in Italy shows. Google has decided not to make its Bard AI chatbot available in Europe because of the GDPR. But AI also has the power to make the use of AI safer.
ChatGPT has made AI freely available to everyone and brought it into everyday life. In January, just two months after its launch, more than 100 million active users are said to have used the AI. That makes ChatGPT the fastest-growing consumer application in history, according to Reuters. With every new version, this AI gets bigger and better. And with every new version, it raises more questions: legal, technical, and ethical, because there is a lack of transparency, and no one from the outside can look into this black box.
Italy has decided to ban ChatGPT. According to the Italian data protection authority, the AI violates the principles of the GDPR. The European states and their data protection authorities see it as their duty to take AI into account. It fits the picture that the facial recognition company Clearview AI was fined three times in the past year, each time up to 20 million euros. Data protection authorities in the UK, Italy, and Greece found that this company and its AI were violating citizens' rights.
Google decided to leave the EU out of the rollout of its Bard AI chatbot. Neowin reports that, without clearly acknowledging the chatbot's absence from the EU, Google says on a support page that it will "expand to more countries and territories in a way that is consistent with local regulations".
The overall picture is blurred, but there is a clear tendency: the use of AI creates legal risks wherever GDPR-relevant data is handled. And the statistics show that the authorities in Europe continue to impose heavy penalties, especially against prominent American companies in the technology sector. From May 2022 to April this year, fines totaling 1.1 billion euros were levied; nine of the ten highest penalties were imposed on US-based organisations. And this week, Meta was fined €1.2bn ($1.3bn) by EU regulators for violating the GDPR.
The EU wants to clarify the legal situation in the AI Act. The Europe-wide AI law cleared its first hurdle on May 11 and is scheduled to be passed in plenary in mid-June. It will take until 2024 for the law to actually come into force, and only much later will the first cases show how it works in practice. What is certain is that companies and their employees will face new tasks and obligations from a compliance perspective.
Get involved, with controlled risk
Nobody in the private sector can afford to wait until then. Companies and private individuals need clear orientation now, because they want to tap the great potential of this technology, and the first companies are already doing so. Here are four clear recommendations on how companies can approach AI without creating legal risks or getting in the way of users, while positioning themselves so that the AI Act can be implemented in full without turning IT upside down:
Always think about compliance
Whether the use of AI affects compliance depends entirely on the application scenario and the data used. Anyone who wants to use AI in compliance with the GDPR should seek the advice of a data protection expert before introducing it.
Know your data
Companies and their employees need to know exactly what data they are feeding the AI and what value this data has for the company. Some AI providers deliberately delegate this decision to the data owners, because the owners know their data best.
Understand the content of the data
In order for data owners to make the right decisions, the value and content of the data must be clear. In everyday practice this task is gigantic: most companies have piled up mountains of information that they know nothing about. AI and machine learning can help massively in this area and defuse one of the most complex problems by automatically classifying company data. Predefined filters immediately fish compliance-relevant data such as credit card numbers or other personal details out of the data pool and mark them. Once set loose on the data, this AI develops a company-specific language, a company dialect. And the longer it works and the more company data it examines, the more accurate its results become.
The charm of this AI-driven classification becomes particularly evident when new requirements have to be met. Whatever the AI Act brings, ML- and AI-powered classification will be able to search for the additional attributes and give the company a measure of security for the future.
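As a minimal sketch of how such predefined filters could work, the Python snippet below tags text containing credit card numbers or email addresses. The FILTERS registry, the classify function, and the Luhn check are illustrative assumptions, not the interface of any particular product; new tags, for instance for future AI Act attributes, would simply be registered as additional entries.

```python
import re

def luhn_valid(number: str) -> bool:
    """Checksum used by payment cards; weeds out random digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

# Hypothetical filter registry: each compliance tag maps to a detector.
# New attributes are added here without touching the rest of the pipeline.
FILTERS = {
    "credit_card": lambda text: any(luhn_valid(m) for m in CARD_RE.findall(text)),
    "email_address": lambda text: bool(EMAIL_RE.search(text)),
}

def classify(text: str) -> set:
    """Return every compliance tag whose filter matches the text."""
    return {tag for tag, matches in FILTERS.items() if matches(text)}

# A stray payment record is tagged before it reaches any AI tool.
print(classify("Invoice for card 4539 1488 0343 6467, contact jo@example.com"))
# -> {'credit_card', 'email_address'}
```

Real classification engines go far beyond regular expressions, but the principle is the same: every document ends up carrying machine-readable tags that downstream rules can act on.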
Control data flows
If the data is classified and categorised with the right characteristics, the underlying data management platform can automatically enforce rules without the data owner having to intervene. This reduces the chance of human error, and with it the risk. For example, a company could enforce that certain data, such as intellectual property or financial data, may never be passed on to other storage locations or external AI modules. Modern data management platforms control access to this data by automatically encrypting it and requiring users to authenticate via access controls and multi-factor authentication.
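A minimal sketch of what such tag-driven enforcement could look like, assuming documents already carry the tags from the classification step; the Document class, the BLOCKED_FOR_EXPORT list, and enforce_export are hypothetical names, not the API of any real platform:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    tags: set = field(default_factory=set)

# Tags that may never leave the platform, e.g. toward an external AI module.
BLOCKED_FOR_EXPORT = {"credit_card", "intellectual_property", "financial_data"}

def enforce_export(doc: Document, destination: str) -> bool:
    """Allow an export only if the document carries no blocked tag."""
    violations = doc.tags & BLOCKED_FOR_EXPORT
    if violations:
        print(f"DENIED: {doc.name} -> {destination} ({', '.join(sorted(violations))})")
        return False
    print(f"ALLOWED: {doc.name} -> {destination}")
    return True

enforce_export(Document("q3_forecast.xlsx", {"financial_data"}), "external-ai-api")
enforce_export(Document("press_release.docx"), "external-ai-api")
```

In a production platform, a check like this would sit in front of the encryption and access-control layers, so that a blocked transfer never reaches the point where credentials are even evaluated.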