Built by former Databricks and Apple engineers, OpenPCC enables companies to safely use large language models without exposing confidential or personal information
SAN FRANCISCO--(BUSINESS WIRE)--Confident Security today released OpenPCC, the first open-source standard that allows companies to use large language models (LLMs) without exposing sensitive data. Built by engineers from Databricks and Apple, OpenPCC ensures that AI prompts, outputs, and logs remain fully private, whether companies run models in the cloud or on their own servers.
AI usage has surged across industries, but privacy safeguards have not kept pace. Many large language models store or learn from user input, and some even make AI chats publicly searchable. For enterprises, the risks are mounting:
- 98% of companies rely on vendors that have experienced breaches
- 78% of employees have pasted internal information into AI tools
- One in five of those cases includes personal or regulated data such as PII, PHI, or PCI
OpenPCC solves this problem by protecting data while it is in use, as AI models run. It acts as a security layer between enterprise systems and AI models, preventing leakage of confidential data and keeping all user information fully encrypted and inaccessible to unauthorized parties. OpenPCC integrates with minimal code changes, enabling clients to communicate securely with OpenPCC-compliant AI models and establishing a new open standard for AI privacy.
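Integration specifics live in the SDK documentation; the Go sketch below is illustrative only, assuming a hypothetical client whose package, type, and method names are stand-ins rather than the published OpenPCC API:

    package main

    import (
        "context"
        "fmt"
        "log"
    )

    // Client is a hypothetical stand-in for an OpenPCC SDK client; the
    // published package, types, and methods may differ.
    type Client struct{ endpoint string }

    // NewClient sketches the trust-establishment step: before any prompt
    // leaves the enterprise, the client would verify the server's remote
    // attestation evidence and set up an encrypted session.
    func NewClient(ctx context.Context, endpoint string) (*Client, error) {
        return &Client{endpoint: endpoint}, nil
    }

    // Complete sketches the data path: the prompt is encrypted client-side,
    // streamed to the attested model, and the reply decrypted locally, so
    // the infrastructure operator never handles plaintext.
    func (c *Client) Complete(ctx context.Context, prompt string) (string, error) {
        return "(encrypted round trip to " + c.endpoint + ")", nil
    }

    func main() {
        ctx := context.Background()
        client, err := NewClient(ctx, "https://models.example.com")
        if err != nil {
            log.Fatal(err)
        }
        reply, err := client.Complete(ctx, "Summarize this contract.")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(reply)
    }

The point of the sketch is the shape of the flow: a client establishes trust first, verifying attestation evidence before any data leaves the enterprise, and only then streams encrypted prompts.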
The release includes:
- OpenPCC specification and SDKs, a standardized protocol for secure AI usage across models and providers, released under the Apache 2.0 license
- OpenPCC-compliant inference server, demonstrating how CONFSEC deploys and verifies private AI interactions in production environments, released under the FSL license
- Core privacy libraries, including Two-Way for encrypted client–AI streaming, go-nvtrust for GPU attestation, and Go implementations of Binary HTTP (BHTTP) and Oblivious HTTP (OHTTP) for fully private communication between users and AI systems (sketched below)
Together, these components provide a practical foundation for securely deploying AI at scale.
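The BHTTP and OHTTP libraries exist to split trust between a relay, which learns who is asking but never what is asked, and a gateway, which learns what is asked but never who asked. The self-contained Go sketch below illustrates that split conceptually, substituting standard-library X25519, HKDF, and AES-GCM for the HPKE and Binary HTTP machinery that real Oblivious HTTP (RFC 9458) specifies; it is a teaching model, not Confident Security's implementation:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/ecdh"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/hkdf"
    )

    // deriveKey turns an ECDH shared secret into an AES-256 key.
    func deriveKey(secret []byte) ([]byte, error) {
        key := make([]byte, 32)
        _, err := io.ReadFull(hkdf.New(sha256.New, secret, nil, []byte("ohttp-sketch")), key)
        return key, err
    }

    // seal encrypts plaintext under key with AES-GCM, prepending the nonce.
    func seal(key, plaintext []byte) ([]byte, error) {
        block, err := aes.NewCipher(key)
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            return nil, err
        }
        return gcm.Seal(nonce, nonce, plaintext, nil), nil
    }

    // open reverses seal.
    func open(key, sealed []byte) ([]byte, error) {
        block, err := aes.NewCipher(key)
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
        return gcm.Open(nil, nonce, ct, nil)
    }

    func main() {
        curve := ecdh.X25519()

        // The gateway publishes a static public key out of band.
        gatewayPriv, _ := curve.GenerateKey(rand.Reader)

        // Client: ephemeral key, ECDH with the gateway, encrypt the request.
        clientPriv, _ := curve.GenerateKey(rand.Reader)
        clientSecret, _ := clientPriv.ECDH(gatewayPriv.PublicKey())
        clientKey, _ := deriveKey(clientSecret)
        sealed, _ := seal(clientKey, []byte(`POST /complete {"prompt": "..."}`))

        // Relay: forwards (client public key, ciphertext) unread; it learns
        // who is asking, never what is asked.

        // Gateway: recomputes the shared secret and decrypts; it sees the
        // request but only the relay's address, never the client's.
        gatewaySecret, _ := gatewayPriv.ECDH(clientPriv.PublicKey())
        gatewayKey, _ := deriveKey(gatewaySecret)
        plaintext, _ := open(gatewayKey, sealed)
        fmt.Printf("gateway decrypted: %s\n", plaintext)
    }

The split matters because neither party alone can reconstruct both identity and content; in the full protocol, GPU attestation (go-nvtrust) and transparency logs add verifiability on top of this private transport.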
“Companies are being pushed to adopt AI faster than they can secure it,” said Jonathan Mortensen, founder and CEO of Confident Security. “Most tools ask you to trust that data is safe. OpenPCC proves that every prompt, output, and log stays private. As AI transforms industries, privacy will define which companies earn trust and lead the market.”
“Enterprises have been stuck choosing between innovation and security,” said Aditya Agarwal, General Partner at South Park Commons. “What makes OpenPCC different is that it was built by engineers who understand both. By open-sourcing the framework and committing to independent governance, Confident Security is giving enterprises a standard they can finally trust to run AI safely.”
OpenPCC follows Confident Security’s $5 million seed round from Decibel, Ex/Ante, South Park Commons, Halcyon, and SAIF. The launch advances the company’s broader mission to make privacy infrastructure as universal and foundational as SSL.
To ensure OpenPCC remains neutral and community-driven, Confident Security is establishing an independent foundation to steward the standard long term, preventing future license changes or other reversals that could limit access.
About Confident Security
Confident Security builds provably private infrastructure for AI. The company created CONFSEC, an enterprise-grade privacy platform, and OpenPCC, an open-source standard based on Apple’s Private Cloud Compute (PCC). Both are thoroughly tested, externally audited, secure, production-ready, and deployable on any cloud or on bare metal. Using a combination of OHTTP, blind signatures, remote attestation, TEEs, TPMs, transparency logs, and more, Confident Security provably guarantees that nobody can see a user’s prompt.
The company is led by Jonathan Mortensen, a two-time founder who has previously sold companies to BlueVoyant and Databricks. It is built by a team with deep expertise in secure systems, AI, infrastructure, and trusted computing, with backgrounds from Google, Apple, Databricks, Red Hat, and HashiCorp.
Contacts
Media Contact
Emily Lupinacci
VSC for Confident Security
[email protected]