Every day, consumers weigh the trade-offs between data privacy and a range of value-added services.
Every time they sign in to a guest Wi-Fi network at their favourite café, retrieve a voucher code for a discount on the newest fashion line, track stats on their health and sleeping patterns, or simply use social media to keep up with friends and trends, data is the currency being exchanged.
In many ways, true privacy is being taken for granted. It is certainly difficult to achieve in the purest sense given how hyperconnected our lives have become.
The start of 2023 has intensified the debate around privacy, especially in relation to AI, which is rising to prominence in ever greater and more advanced forms. Platforms such as ChatGPT have dominated headlines to such an extent that most people will have at least heard of them.
Indeed, the remarkable advance of generative AI has even led to calls from some tech heavyweights for the world to take stock. In March, more than 1,000 artificial intelligence experts, researchers, and backers, including the likes of Elon Musk, Emad Mostaque, and Steve Wozniak, signed an open letter calling for an immediate pause on the creation of “giant” AIs for at least six months.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter reads. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Managing the privacy risk
Privacy is, to Musk and company, the greatest of these risks. And as AI becomes more entrenched in how people and businesses operate, the scale of the challenge will only increase, simply because AI systems such as ChatGPT need to feed on new data to learn. Fears over data privacy prompted Italy’s data-protection authority to temporarily ban ChatGPT earlier this year.
While many will identify with the concerns raised, the idea of putting the genie back in the bottle is both unrealistic and, for those wanting to continue benefiting from the convenience, efficiency, and personalisation offered by AI, undesirable.
Furthermore, risks such as those relating to data privacy can be managed.
Technologies and solutions supporting privacy-preserving machine learning (PPML) already exist, and one of them is fully homomorphic encryption (FHE).
FHE enables the processing of data without decrypting it, a capability that allows organisations to offer services to customers without ever seeing their users’ data, all while maintaining the same functionality.
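To make this more tangible, here is a minimal sketch using Zama’s open-source concrete-python library (the exact API may differ between versions, so treat it as illustrative rather than definitive). A plain Python function is compiled into an FHE circuit that is then evaluated entirely on encrypted inputs:

```python
from concrete import fhe

# Mark both inputs as encrypted; the compiled circuit will only ever
# operate on ciphertexts.
@fhe.compiler({"x": "encrypted", "y": "encrypted"})
def add(x, y):
    return x + y

# A set of representative inputs lets the compiler size the circuit.
inputset = [(2, 3), (0, 0), (7, 7), (1, 6)]
circuit = add.compile(inputset)

# Encrypt, evaluate homomorphically, then decrypt: the party evaluating
# the circuit never sees the values 4 and 4 in the clear.
assert circuit.encrypt_run_decrypt(4, 4) == 8
```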
It doesn’t take much imagination to see how this has wide-reaching implications. Take preventative medicine. AI systems are already providing advice to people based on the data they put into them, such as DNA, medical history, and lifestyle habits. With FHE, all this information could be sent and processed in encrypted form, with the AI sending back encrypted health recommendations that only the patient can decrypt. This ensures data is not leaked to third parties such as health insurers while also remaining safe from cybercriminals.
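As a hypothetical sketch of that flow, again using concrete-python: the feature names, weights, and “risk score” below are invented stand-ins for a real model, but they show how encryption, computation, and decryption can sit on different sides of a trust boundary:

```python
from concrete import fhe

# A toy linear score standing in for a real health model; the features
# and weights are purely illustrative.
@fhe.compiler({"age": "encrypted", "bmi": "encrypted", "smoker": "encrypted"})
def risk_score(age, bmi, smoker):
    return 2 * age + 3 * bmi + 40 * smoker

inputset = [(18, 16, 0), (90, 45, 1), (40, 25, 0)]
circuit = risk_score.compile(inputset)

# Patient side: encrypt the sensitive inputs before they leave the device.
enc_age, enc_bmi, enc_smoker = circuit.encrypt(55, 30, 1)

# Provider side: compute on ciphertexts only; no plaintext is visible here.
enc_score = circuit.run(enc_age, enc_bmi, enc_smoker)

# Patient side: only the key holder can read the recommendation.
assert circuit.decrypt(enc_score) == 2 * 55 + 3 * 30 + 40 * 1
```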
Coming back to ChatGPT: in this scenario, FHE would allow users to interact with the model without revealing anything about their conversation, which could include highly personal information.
From a B2B perspective, this kind of end-to-end encryption has the potential to greatly increase collaboration between organisations on R&D projects, especially where sensitive personal or commercial information is involved. With FHE, such collaborations could take place without any party having to sacrifice the privacy of its data.
The best of both worlds
Our mission at Zama is to achieve FHE ubiquity. If we succeed, everything we do online and via AI could become encrypted end-to-end without ever compromising the user experience.
We are not far away. Currently, Zama is working with several large technology players to improve hardware and increase the speed and functionality of FHE.
It is also important to recognise that FHE is not, in and of itself, a silver bullet that will allay all the fears outlined in the open letter. Other data privacy tools, such as secure multi-party computation (MPC) and federated learning, are already in use and will continue to be significant. And even if we could solve the privacy issue overnight, there is still the intrinsic threat that AI could well replace some people’s jobs.
Meanwhile, another major obstacle to scaling up AI and machine learning lies in the quality and cleanliness of the data used to fuel it. A key criticism of ChatGPT, for example, is not only that the dataset it draws its responses from is relatively old (its training data ends in 2021), but also that some of the answers it gives to subjective questions contain unconscious bias. Again, privacy is not the key problem here; it is the source data being fed into the AI that perpetuates errors and bias.
In this broader context, therefore, FHE represents one of a suite of privacy solutions. And it is important to recognise that privacy itself is just one of the issues AI must tackle before it can be used securely and responsibly in everyday life.