
Introduction: AI Can’t Be Trusted
We can’t trust AI models anymore, and AI hallucinations are to blame. AI hallucination refers to incorrect or misleading results that AI models generate due to insufficient training data, incorrect assumptions, or biased data used to train the model.
In 2025, over 378 million people worldwide use AI-powered tools, an increase of 64.4 million compared to 2024. As the user base grows, the ability of AI to deliver accurate information becomes ever more crucial, yet accuracy is precisely where today’s models fall short.
Research conducted by OpenAI on its latest and most powerful models, o3 and o4-mini, found that o3 hallucinated 33% of the time, while o4-mini’s hallucination rate sits at 48%. The same report found these rates to be roughly double that of the older o1 model. OpenAI does not know why, hypothesising that because newer models “make more claims overall”, they often produce “more inaccurate/hallucinated claims”.
As more enterprises look to AI for business solutions, this lack of accuracy and reliability shows that AI is not ready for mass adoption.
AI’s usefulness comes into question when it cannot provide factual answers. Adoption and growth in this sector will come to a standstill if we cannot establish confidence in AI models. The problem is not new: ChatGPT has always carried the disclaimer “ChatGPT can make mistakes. Check important info.” below its chatbox.
To overcome this fundamental lack of trust and enable AI’s widespread adoption, AI needs Zero-Knowledge Proofs (ZKPs), which make outputs verifiable while protecting intellectual property.
The Privacy Paradox Is Keeping AI Hallucinations Around
The reason developers have yet to implement a solution to unverifiable AI output is the privacy paradox: the inability to deliver both security and transparency without sacrificing one for the other.
For developers and enterprises, user trust is paramount for industry growth. Without verifiable trust, AI-powered platforms risk a severe lack of adoption. Making matters worse, some of AI’s failures have been very public.
In 2024, Air Canada was ordered to pay damages to a passenger after its AI-powered virtual assistant provided incorrect information. More recently, in 2025, the Chicago Sun-Times and Philadelphia Inquirer took reputational hits when their editions featured a list of recommended books that do not exist, the result of AI-generated misinformation.
Security layers such as traditional encryption and firewalls are already applied to AI systems, but they are inadequate for this specific challenge. None of them lets developers prove the accuracy of their AI models’ outputs without exposing proprietary code or sensitive training data, creating an impossible choice between trust and intellectual property.
The Solution Already Exists: Zero-Knowledge Proofs
Zero-Knowledge Proof (ZKP), a well-established technology commonly used as a security layer in the cryptocurrency sector, provides exactly the kind of protection AI needs. It is a cryptographic protocol that allows one party to prove to another that a statement is true, or that a computation was performed correctly, without revealing any information beyond the validity of the statement itself.
Why is ZKP the solution? Because it resolves the privacy paradox, and more.
Unlike traditional security layers, ZKP does not require sharing the underlying data. Instead, it proves possession of specific knowledge or satisfaction of certain conditions, much like verifying that you are old enough to enter a venue without revealing your birth date or any other detail on your ID.
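To make the idea concrete, here is a minimal sketch of a classic zero-knowledge construction, a non-interactive Schnorr proof using the Fiat-Shamir heuristic, in Python. The tiny group parameters are illustrative assumptions only and offer no real security; production systems use standardised large groups or modern proof systems such as zk-SNARKs and zk-STARKs.

```python
# Toy sketch of a zero-knowledge proof: prove knowledge of a secret x with
# y = g^x mod p, without revealing x. Demo parameters only, not secure.
import hashlib
import secrets

p, g = 23, 5          # toy public parameters (assumption: demo values only)
q = p - 1             # order of the multiplicative group mod p

def challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the verifier's challenge by hashing the public values
    return int.from_bytes(hashlib.sha256(f"{g}|{y}|{t}".encode()).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prover knows secret x; publishes y = g^x and a proof (t, s), never x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)                  # one-time random nonce
    t = pow(g, r, p)                          # commitment
    s = (r + challenge(y, t) * x) % q         # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c without ever learning x."""
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)                      # the prover's secret
print(verify(*prove(x)))                      # True: claim verified, x stays private
```

The same verify-without-revealing pattern is what modern proof systems extend to arbitrary computations, including the computations inside a neural network.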
Applied to AI, ZKPs can verify the correctness of a computation without disclosing the details of how it was performed. This allows businesses to give users verifiable accuracy for their AI-powered platforms while safeguarding intellectual property.
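In practice, the workflow might look like the sketch below. This is a hedged illustration only: `commit`, `zk_prove`, and `zk_verify` are hypothetical stand-ins for a real proving system such as a zk-SNARK library, and the prompt and answer are made up. The provider publishes a commitment to its model once, attaches a proof to each response, and the user checks the proof against the commitment without ever seeing the weights.

```python
# Conceptual workflow for verifiable AI inference. The "proof" here is a
# placeholder; a real proving system would emit a succinct cryptographic proof.
import hashlib

def commit(model_weights: bytes) -> str:
    """Provider publishes a one-time commitment (here, a simple hash) to its model."""
    return hashlib.sha256(model_weights).hexdigest()

def zk_prove(model_weights: bytes, prompt: str, output: str) -> dict:
    # Hypothetical stand-in: a real system would prove that `output` is what
    # the committed model produces on `prompt`, revealing neither the weights
    # nor any intermediate computation.
    return {"commitment": commit(model_weights), "claim": (prompt, output)}

def zk_verify(commitment: str, prompt: str, output: str, proof: dict) -> bool:
    # Hypothetical stand-in: a real verifier checks the proof against the
    # public commitment quickly and without access to the weights.
    return proof["commitment"] == commitment and proof["claim"] == (prompt, output)

# Provider side: commit once, then prove each response.
weights = b"proprietary model weights (never shared)"
published_commitment = commit(weights)
prompt, answer = "What is the capital of France?", "Paris"
proof = zk_prove(weights, prompt, answer)

# User side: trust the answer only if the proof checks out against the commitment.
print(zk_verify(published_commitment, prompt, answer, proof))  # True
```

The key design point is that verification is far cheaper than re-running the model and discloses nothing about the weights themselves.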
A specific application of ZKP, known as Zero-Knowledge Proof of Training (ZKPoT), can further enhance trust by verifying the integrity of the training process itself. ZKPoT allows developers to prove that an AI model was trained on a specific dataset or adhered to certain parameters without revealing the training data. Users can then trust that outputs rest on reliable training, addressing the hallucination problem while preserving proprietary information.
Alternative privacy technologies such as homomorphic encryption exist, but they are less efficient than ZKPs for this purpose. ZKPs do not require encrypting and decrypting every computation, cutting processing time and cost and making them more scalable for widespread AI deployment, especially for smaller enterprises and developers in the AI sector.
Some may argue that, just as AI requires specialised hardware, so does ZKP implementation, raising costs for developers. High-performance chips suited to enterprise-level AI and ZK workloads currently sell for around $10,000 to $12,000 per unit.
Despite the high price of specialised hardware, costs can be reduced through tokenised hardware, which removes the need for developers and smaller enterprises to own and maintain specialised chips, democratising access to the technology.
It Is Time To Have Knowledge About Zero-Knowledge
Embracing Zero-Knowledge Proofs is not just a technological upgrade but a strategic move for a secure, transparent, and universally adopted AI ecosystem. Building user and stakeholder trust is crucial for the success of AI-powered platforms. Improper execution damages reputation, user experience, and trust.
Reluctance to embrace Zero-Knowledge Proofs will set the industry back, especially as AI models continue to be trained on unverified data. To overcome this, we must prepare for what’s next, not merely react to what is.
Adopting ZKP offers verifiable computation and privacy-preserving transparency, enabling more AI solutions to be built and unlocking new applications. As developers, it is our responsibility to build trustworthy systems, as the future of AI hinges on trust. Without solving this, the industry risks obsolescence.