Large language models are now in the workflow of almost every team, but most companies still don’t have a simple way to keep prompts, outputs, and logs truly private.
Enter: Confident Security, a San Francisco startup founded by ex-Databricks engineers, which today released OpenPCC, an open-source standard that lets enterprises use LLMs without exposing confidential or personal data.
“Companies are being pushed to adopt AI faster than they can secure it,” said Jonathan Mortensen, founder and CEO of Confident Security. “Most tools ask you to trust that data is safe. OpenPCC proves that every prompt, output, and log stays private.”
What OpenPCC is
OpenPCC is a security layer that sits between enterprise systems and AI models. It wraps every interaction (prompt in, tokens out, logs and telemetry) in encryption and policy, so the model can compute on data without anyone (vendor, operator, or insider) being able to read it in the clear.
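To make that concrete, here is a minimal sketch of the sealing pattern in Go, using only the standard library. It is a generic illustration under assumed conditions, not the OpenPCC SDK's actual API: a real deployment would verify the server key against a hardware attestation report and derive keys with HPKE (RFC 9180) rather than using the raw ECDH secret directly.

```go
package main

// Sketch of the sealing pattern described above: the client encrypts a
// prompt to a key that exists only inside the attested inference
// environment, so no intermediary sees plaintext. Generic illustration
// only; not the OpenPCC SDK's API.

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Server side: a keypair that, in a real deployment, lives inside a
	// TEE and is bound to an attestation document the client verifies.
	serverKey := must(ecdh.X25519().GenerateKey(rand.Reader))

	// Client side: ephemeral key agreement, then AES-256-GCM sealing.
	clientKey := must(ecdh.X25519().GenerateKey(rand.Reader))
	shared := must(clientKey.ECDH(serverKey.PublicKey())) // 32-byte secret

	gcm := must(cipher.NewGCM(must(aes.NewCipher(shared))))
	nonce := make([]byte, gcm.NonceSize())
	must(rand.Read(nonce))

	prompt := []byte("draft earnings summary: internal only")
	sealed := gcm.Seal(nonce, nonce, prompt, nil) // nonce prepended

	// Everything on the wire, and everything a vendor could log, is this:
	fmt.Printf("ciphertext: %x...\n", sealed[:16])
}
```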
The launch includes three core pieces:
- OpenPCC spec and SDKs (Apache 2.0): a protocol teams can implement across models and providers.
- An OpenPCC-compliant inference server (FSL license): a reference deployment that shows how Confident Security (“CONFSEC”) runs and verifies private AI in production.
- Privacy libraries:
  - Two-Way for encrypted client↔AI streaming,
  - go-nvtrust for GPU attestation,
  - Go implementations of Binary HTTP (BHTTP) and Oblivious HTTP (OHTTP) for private, unlinkable communication (a relay sketch follows this list).
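The OHTTP piece is what keeps requests unlinkable: one party relays traffic it cannot read, while another decrypts traffic whose origin it cannot see. The sketch below illustrates just the relay half using Go's standard reverse proxy; real OHTTP (RFC 9458) additionally encodes the inner request as Binary HTTP and encapsulates it with HPKE, and the gateway URL here is a placeholder.

```go
package main

// Relay half of the Oblivious HTTP split, sketched with Go's standard
// reverse proxy: the relay knows who is connecting but forwards only
// ciphertext; the gateway decrypts content but never learns the source.
// Illustrative only; the real protocol is RFC 9458 over BHTTP (RFC 9292).

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	gateway, err := url.Parse("https://gateway.example.com") // placeholder
	if err != nil {
		log.Fatal(err)
	}

	relay := &httputil.ReverseProxy{
		Rewrite: func(pr *httputil.ProxyRequest) {
			pr.SetURL(gateway)
			// Drop headers that could link the request back to a client.
			// We deliberately do not call SetXForwarded, so the outbound
			// request carries no source address either.
			pr.Out.Header.Del("Cookie")
			pr.Out.Header.Del("Authorization")
			pr.Out.Header.Del("User-Agent")
		},
	}

	log.Fatal(http.ListenAndServe(":8080", relay))
}
```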
The point is practicality. Minimal code changes. Drop-in support for clouds or your own racks. A standard any model provider can adopt.
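In code, "drop-in" would look something like the stub below. Every name in it (Dial, Complete, the endpoint) is hypothetical, invented for illustration rather than taken from the published SDKs; the point is the shape: connect once, verify attestation under the hood, and call the model as usual.

```go
package main

// Hypothetical sketch of the "drop-in" shape. All names are invented for
// illustration; consult the published OpenPCC SDKs for the real surface.

import (
	"context"
	"fmt"
)

// PrivateClient stands in for an OpenPCC-compliant client that verifies
// attestation on connect, then streams sealed prompts and tokens.
type PrivateClient struct{ endpoint string }

func Dial(endpoint string) (*PrivateClient, error) {
	// Real client: fetch and verify the attestation report, pin the key.
	return &PrivateClient{endpoint: endpoint}, nil
}

func (c *PrivateClient) Complete(ctx context.Context, prompt string) (string, error) {
	// Real client: seal the prompt, relay via OHTTP, decrypt streamed tokens.
	return "(model output)", nil
}

func main() {
	client, err := Dial("https://inference.example.com") // placeholder URL
	if err != nil {
		panic(err)
	}
	out, err := client.Complete(context.Background(), "Summarize this contract.")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```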
Why this matters
AI adoption is outpacing privacy controls. The risks are not abstract:
- 78% of employees say they’ve pasted internal information into AI tools.
- One in five of those cases includes regulated data (PII, PHI, PCI).
- 98% of companies rely on vendors that have experienced breaches.
That’s a bad mix: more sensitive input going into systems that may store it, learn from it, or expose it in logs. Some products have even made AI chats publicly searchable. Enterprises need a standard way to block data leakage without blocking AI itself.
Confident Security says it will set up an independent foundation to steward OpenPCC. The goal is neutrality and stability: no sudden license shifts, no vendor lock-in, and a clear path for multiple model providers to interoperate.
“Enterprises have been stuck choosing between innovation and security,” said Aditya Agarwal, General Partner at South Park Commons. “What makes OpenPCC different is that it was built by engineers who understand both. By open-sourcing the framework and committing to independent governance, Confident Security is giving enterprises a standard they can finally trust to run AI safely.”
Where this fits in the AI stack
OpenPCC follows a broader trend: privacy at the infrastructure layer, not bolted on at the app tier. Apple popularized the concept with Private Cloud Compute (PCC) for consumer devices. Confident Security positions OpenPCC as the enterprise-grade, open-source counterpart: a shared standard any provider can adopt.
Security teams need to approve AI projects without stalling them. If OpenPCC works as described, it can lower the approval burden by giving CISOs and legal teams verifiable controls and transparent logs. It also gives procurement a cleaner story: choose any OpenPCC-compliant model or host, switch later if needed, and keep the same privacy guarantees.
The open licenses matter. Apache 2.0 for the spec and SDKs invites broad adoption. The reference server under FSL gives enterprises something production-ready to start from while keeping the governance open.
Who’s behind it
Confident Security raised $5 million in seed funding from Decibel, Ex/Ante, South Park Commons, Halcyon, and SAIF. Mortensen is a two-time founder with prior exits to BlueVoyant and Databricks. The team’s background spans Google, Apple, Databricks, Red Hat, and HashiCorp, with depth in trusted computing, secure systems, and large-scale infrastructure.
The pitch is straightforward: keep using the models you want, but make privacy the default. If OpenPCC becomes the common language for that, it could do for AI privacy what SSL/TLS did for web traffic by turning “secure by default” from a promise into a baseline.
As Mortensen puts it: “As AI transforms, privacy will define which companies earn trust and lead the market.”
Learn more at https://confident.security/.