Future of AI

Verifiable encrypted compute is the secret to privacy-first agents

By Lukas Helminger, Co-founder and CEO, TACEO

AI’s hype has not been without controversy – both legal and ethical. The thirst for ever more data has led AI providers to scrape organisations’ proprietary content without consent. One example is The New York Times suing OpenAI and Microsoft in 2023 for training models on its content without permission. More recently, Anthropic agreed to pay $1.5bn – in what could be the largest publicly reported copyright recovery in history – to authors whose pirated work was used to train the firm’s Claude chatbot.

Privacy concerns are another result of Big Tech’s mission creep. In August 2025, it was revealed that contractors training Meta’s AI chatbots had access to users’ chats with the bots – including sensitive information such as names and addresses.

However, restricting AI’s access to information comes with its own flaws. AI is only as reliable as the data it can access, so limiting its reach inevitably leads to more errors and hallucinations. There have been countless examples of this. In 2022, for instance, Air Canada’s chatbot promised a customer a discount that wasn’t actually available. Despite the airline arguing that it was not responsible for the chatbot’s mistake, it was ordered to pay damages in what could prove a landmark ruling for companies relying on AI for customer service.

For AI agents, getting this balance right is vital: the more personal data they are trusted with, the more valuable they can be. Take, for example, a personal assistant like Alexa. The more information it can access about you – bank statements, location, subscriptions, private chats, journal entries, healthcare data – the more personalised and relevant its responses can be. Yet revealing this level of detail to an AI assistant would understandably make you nervous, as you do not know who might see it.

The only way to break the current trade-off between privacy and accuracy is to fully encrypt sensitive data before AI agents access it. With the right guardrails and cryptographic techniques, companies can give AI agents all the data they need to be useful without compromising on privacy or security.

Solving the privacy-accuracy trade-off 

Using agents in any capacity involves sharing some level of information. However, who gets to see that information, and in what detail, should always be tightly controlled – users must be able to keep some details entirely private, even from the AI providers powering the agent.

The first step towards this is enabling encrypted computation that can be independently verified. By using advanced cryptographic techniques, data can remain permanently encrypted while also allowing multiple parties to prove specific properties about it. 

This creates what’s known as a ‘Private Shared State’: a cryptographic data structure that allows multiple parties to compute on and update shared information without revealing their individual inputs. It is made possible by combining two cryptographic techniques: Multiparty Computation (MPC) and Zero-Knowledge Proofs (ZKPs). MPC lets participants jointly compute on encrypted “shares” of data, so no one ever sees anyone else’s raw input. ZKPs, meanwhile, let a participant prove that something is true – for instance, that a transaction is valid – without disclosing the underlying data or relying on a central authority.
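
To make the MPC half of that combination concrete, here is a minimal Python sketch of additive secret sharing – the basic trick that lets parties compute on “shares” of data. The party names, the field size and the summed computation are illustrative assumptions only, not any particular production protocol; real systems add authentication, malicious security and far richer computations than a sum.

```python
# Minimal sketch of additive secret sharing, the core trick behind MPC.
# Assumption for illustration: three parties want the SUM of their private
# inputs without revealing them. Not a production protocol.
import secrets

PRIME = 2**61 - 1  # prime modulus; all arithmetic happens in this field


def share(secret: int, n_parties: int = 3) -> list[int]:
    """Split `secret` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine shares; any subset smaller than all of them reveals nothing."""
    return sum(shares) % PRIME


# Each party secret-shares its private input (e.g., a spending total).
inputs = {"alice": 4200, "bob": 1337, "carol": 980}
shared = {name: share(value) for name, value in inputs.items()}

# Party i holds one share from every participant and adds them locally.
# No single party ever sees anyone else's raw input.
local_sums = [sum(shared[name][i] for name in inputs) % PRIME for i in range(3)]

# Only the combined result is ever opened.
total = reconstruct(local_sums)
assert total == sum(inputs.values())
print(f"Joint total revealed: {total}; individual inputs stay hidden.")
```

Only the combined result is ever opened; each party’s individual number never leaves its machine in the clear.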

When combined with secure guardrails that control what information agents can reveal in their outputs and answers, knowledge could exist in a kind of ‘quantum state’: forever hidden, yet always available to validate against. For example, Alexa could compute on your personal details to optimise your calendar, while you remain confident that Mr Bezos will never be able to see those details – because everything the device computes on stays permanently encrypted.

And the use cases don’t stop with just AI agents. Banks could use verifiable encrypted compute to perform more accurate credit scoring or money laundering investigations – screening customers’ spending data without seeing their personal information. Or a group of hospitals could use the technology to prove that a clinical trial meets safety standards without revealing the results of the trial or the identities of participants.  
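
The ‘prove without revealing’ side can be sketched just as simply. Below is a toy Schnorr-style zero-knowledge proof in Python: the prover convinces a verifier that it knows a secret value without ever transmitting it. The tiny parameters and the statement being proven are purely illustrative assumptions – real deployments use large groups or elliptic curves, and general-purpose proof systems to express statements such as “this trial meets its safety thresholds”.

```python
# Toy Schnorr-style zero-knowledge proof: prove knowledge of a secret x with
# public = G^x mod P, without revealing x. Parameters are tiny,
# illustration-only values; not secure for real use.
import hashlib
import secrets

P = 23  # toy prime modulus
Q = 11  # prime order of the subgroup generated by G ((P - 1) / 2)
G = 2   # generator of that order-Q subgroup


def challenge(public: int, commitment: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the public values."""
    data = f"{public}|{commitment}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def prove(secret_x: int) -> tuple[int, int, int]:
    """Prover: output (public key, commitment, response)."""
    public = pow(G, secret_x, P)
    r = secrets.randbelow(Q)           # random nonce
    commitment = pow(G, r, P)          # t = g^r
    c = challenge(public, commitment)  # c = H(y, t)
    response = (r + c * secret_x) % Q  # s = r + c*x mod q
    return public, commitment, response


def verify(public: int, commitment: int, response: int) -> bool:
    """Verifier: check g^s == t * y^c without ever learning x."""
    c = challenge(public, commitment)
    return pow(G, response, P) == (commitment * pow(public, c, P)) % P


x = 7                                    # the prover's private value
proof = prove(x)
print("proof verifies:", verify(*proof))  # True, yet x is never sent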

In each case, the Private Shared State would allow multiple parties or agents to collaborate on and verify information without sensitive data ever being made public – enabling secure business-to-business collaboration as well as protecting consumers’ privacy. The potential applications are only beginning to emerge.

Preparing for a privacy-first future  

The future of AI depends on being able to strike the balance between privacy and accuracy, without sacrificing one for the other. With recent advances in the field of cryptography, that trade-off no longer needs to exist. 

Verifiable encrypted compute offers a new foundation: one where AI systems can operate on private information without ever revealing it. Paired with the right safeguards, this approach enables agents that are not only smarter and more capable, but also trustworthy by design. 

If we want AI that works for everyone over the long term, privacy has to be built in from the start. The tools are here – now it’s time to use them.
