
Enterprises are racing to adopt AI technology, and employees are increasingly using AI chatbots to assist with everyday tasks. In many cases, companies’ AI enthusiasm has outpaced proper privacy and security planning; teams are pasting source code into chatbots, uploading customer records for AI analysis, and letting third-party services sit in the middle of sensitive workflows.
For organizations that handle confidential client data, the ways most AI tools operate, in their default form (centralized, cloud-hosted, and data-hungry), creates risks that are hard to justify. Privacy-centric organizations don’t need to eschew AI entirely, but they do need to install proper safeguards and ensure AI is leveraged responsibly to avoid exposing sensitive data.
Default AI Setups Are Risky
Most AI services don’t run on your laptop, they run on the provider’s servers. Anything you paste (prompts, code, files) gets sent to systems you don’t control, and may be stored for later use. The vendor decides how long that data is kept, how it’s backed up, and who inside their company can see it. Once sensitive material leaves your network, you’re relying on someone else’s security and promises.
This goes against basic data hygiene. Companies’ source code, customer data, unreleased features, and internal findings should never be shared externally.
Companies need to treat AI like any other third-party service. They should decide which types of data are never allowed to leave their network, and block uploads of those categories. For many B2B enterprises, this likely includes client data (which companies are often contractually obligated to protect). For instance, at Oak Security, we never share our clients’ data with any external AI services such as OpenAI, however useful those tools might be at code review.
Every company should develop a policy for how they use AI, and when in doubt, should treat external AI services just like any other third party vendor. Logs and metadata tend to stick around, so plan as if your AI conversations could be audited or disclosed later.
On-Device AI Is More Practical Than You Think
AI models don’t have to run on a massive server farm to be capable. Modern laptops and servers can run language models locally for many everyday tasks. At Oak Security, our auditors use open-source models run on everyday MacBooks—Apple Silicon is surprisingly proficient at running modern LLMs.
Running AI models locally is relatively inexpensive (all it takes is a modern laptop, which most employees already have), and allows you to take advantage of AI tools without the risk of exposing sensitive data.
An added bonus is that switching to locally run AI models grants companies greater operational control, frees them from relying on AI vendors. Work doesn’t grind to a halt when OpenAI suffers an outage, and employees don’t need to suffer through confusing or poorly managed updates. You can stick with what works, and update to a new open-source model on your own schedule. And when OpenAI hikes its prices, it won’t impact your bottom line.
The Danger of Black-Box Models
Many commercial AI platforms are opaque. You can’t see what data they were trained on, how they handle prompts internally, or how long they really keep your inputs. For teams operating in especially adversarial environments, that opacity creates a risk.
Switching to open-source models gives companies more transparency. You can actually read the code, and audit it; you can test new models for prompt-injection issues and other failure modes before rolling them out; and you can fine-tune the code yourself, limiting access to certain files, or prohibiting certain behaviors.
Over time, the AI model that you hone and train internally grows into a company asset; instead of a line-item denoting subscription fees forked over to OpenAI each month, AI starts denoting value on your balance sheet.
Leveraging AI Doesn’t Have to Mean Compromising On Privacy
It’s easy to assume that AI tools are the same as a Google search; Google is a third party service, after all, and we don’t expect companies to censor their searches lest Google glean sensitive insight. But that assumption is dangerous; AI tools harvest far more information, and they aren’t “grandfathered in” in the same way as search engines (it’s reasonable to assume that a service provider is taking steps to avoid leaking customer data to Anthropic).
Keeping sensitive data secure does not mean eschewing AI tech entirely. With the right locally-run set up, companies can harness the benefits of AI while maintaining operational security and client trust.
If enough organizations adopt a stance of being “unwilling to compromise on privacy”, the leading AI tools will be forced to offer more privacy-centric solutions to compete. Until then, companies will need to take a more hands-on approach to AI to keep sensitive data secure.


