
Securing Your AI ROI with Better Cybersecurity

By Greg Anderson, CEO/founder at DefectDojo

AI may seem ubiquitous, but we haven’t actually reached the saturation point: a January report found that only about 25% of enterprises have put AI into production. Many companies (53%) are still in the proof-of-concept or planning stages, carefully evaluating how to make the most of their investments.

The promise of increased productivity is hard to ignore, but embracing AI comes with serious security concerns at a time when data breaches are only getting more expensive. Cybercrime is projected to cost $10.5 trillion worldwide this year.

So what’s an enterprise to do? On one hand, a competitive edge and potentially higher profits; on the other, data breaches and damaged customer trust. No matter where you are in your AI journey, it’s time to take the following steps to make sure your AI ROI is positive.

The Current State of Affairs

Currently, AI adoption looks very different from one company to another. For example, Salesforce found that 55% of employees it surveyed used unapproved generative AI (GenAI) tools at work, and 70% of that population had neither completed nor received training on responsible usage.

On its own, a lack of training may not seem concerning, but many organizations and users trust AI output from the moment they begin using it. McKinsey found that just 27% of the organizations it surveyed performed human reviews of all GenAI-created content, and 30% of organizations in that same survey reported that no more than 20% of their GenAI content received human review before being used.

To put it in even starker terms, GitHub and Accenture reported 96% of developers began using AI suggestions the same day that they installed GitHub’s Copilot in their development environments.

This Wild West-like landscape poses a number of major security risks for an enterprise. For example, keeping sensitive information out of an LLM seems like common sense, but Salesforce found that employees hand it over anyway. Each time that happens, more valuable data is added to what is already a treasure trove for cybercriminals. And while LLM providers are always working to improve their safeguards, a team can never be completely certain that bad actors can’t surface its data from an LLM connected to the internet.

For developers, AI usage can expose their organization to a form of supply-chain attack. Bad actors publish repositories of code that look legitimate, then promote them in places where LLMs scrape data, such as Reddit. If a developer later pulls that repository into a project because an AI assistant recommended it or generated code from it, the bad actor can reach any sensitive systems or data the code touches. Right now, there is no good way to prevent these attacks from affecting models trained on data scraped from the internet (which is most of them).

The Best Starting Point

An enterprise’s first line of defense must be an AI usage policy, and companies are beginning to recognize this. In a fall 2024 survey from Littler, 44% of C-suite executives said they had an AI usage policy, up 34 percentage points from the year prior, and another 44% were either developing or considering one. Large enterprises, with their greater risk exposure, were further along on the whole: 63% confirmed they had a policy. However, in that same survey, fewer than half (46%) said they offered educational programs alongside their policies.

That gap matters, because education and clarity are the cornerstones of an effective security policy. Keeping it short and simple works best:

  • This is what counts as sensitive information.
  • These are the tools you can use.
  • This is what you can put into those tools.
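
A policy written in these terms can also be translated into something tooling can check automatically before a prompt ever leaves the network. The sketch below is a minimal, illustrative example, assuming a hypothetical allowlist of approved tools and a few rough patterns for sensitive data; the tool names and patterns are placeholders, not a recommendation of any specific product.

```python
import re

# Hypothetical allowlist of approved GenAI tools (placeholder names, not real products).
APPROVED_TOOLS = {"internal-copilot", "approved-chat"}

# Rough, illustrative patterns for the "this is what counts as sensitive" part of the policy.
SENSITIVE_PATTERNS = {
    "an email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "a credential-like string": re.compile(r"\b(api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE),
    "a card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a prompt headed to a GenAI tool."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"'{tool}' is not an approved tool")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain {label}")
    # Pattern matching catches only the obvious cases; training still matters.
    return violations

if __name__ == "__main__":
    # Flags both the unapproved tool and the credential-like string.
    print(check_prompt("random-chatbot", "Summarize this config: password = hunter2"))
```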

Furthermore, it’s worth emphasizing that following these policies helps the cybersecurity team tremendously. In the battle against cybercriminals, security professionals must cover every single weakness, while a cybercriminal needs just one opening to breach an organization successfully.

Even with policies this clear, consider monitoring tool usage to some extent, balancing trust in employees against the need for security. Although 70% of Littler’s respondents said setting these expectations was their main compliance strategy, expectations are a first step, not the last.
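
As for what lightweight monitoring can look like in practice: most organizations already have web proxy or DNS logs, and simply tallying requests to known GenAI domains gives a picture of shadow usage without reading anyone’s prompts. The sketch below is an illustration only; it assumes a plain-text log with one visited domain per line, and the watched domains are made-up placeholders.

```python
from collections import Counter

# Hypothetical GenAI-related domains to watch for in proxy or DNS logs (placeholders).
WATCHED_DOMAINS = {"chat.example-ai.com", "api.example-llm.net", "genai.example.org"}

def tally_genai_usage(log_path: str) -> Counter:
    """Count hits to watched GenAI domains in a log with one visited domain per line."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            domain = line.strip().lower()
            if domain in WATCHED_DOMAINS:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    # Assumes a local file named proxy_domains.log exported from the proxy.
    for domain, count in tally_genai_usage("proxy_domains.log").most_common():
        print(f"{domain}: {count} requests")
```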

Step 2: Model Security

There are several steps an organization can take to keep its sensitive data secure while still reaping the benefits of AI. Which will be most effective depends on a number of factors, from the type of data in question to the resources at hand.

First, a company can use a third-party model but run it locally, without sending data or updates to servers outside the organization. Microsoft has taken this a step further and created fully air-gapped GenAI models. This offers greater security, but it removes major ways for the model to ingest new information and updates.
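
To make the local option concrete: many self-hosted inference servers expose an HTTP endpoint on the machine itself, so prompts never cross the network boundary. The sketch below assumes a locally hosted model serving an OpenAI-compatible chat endpoint at http://localhost:8000; the port, endpoint path, and model name are assumptions about a typical setup, not a reference to any particular product.

```python
import json
import urllib.request

# Assumed local inference server with an OpenAI-compatible chat endpoint; nothing leaves localhost.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def ask_local_model(prompt: str, model: str = "local-model") -> str:
    """Send a prompt to a locally hosted model and return its reply."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize our internal patching policy in two sentences."))
```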

Alternatively, an organization could consider building a completely proprietary model. However, most enterprises have neither the time nor the capital to make this kind of investment.

Finally, there are tools that actively block specific services and plug leakage opportunities. Like building a proprietary model, this approach is effective but costly: it demands a significant investment in technology and carries a steady time cost as employees hunt for alternatives to the blocked services.

The Next Steps

With no one perfect method out there, the next question is often: Why not automate monitoring when there’s so much data to protect? What about developing a GenAI model trained to spot sensitive data where it shouldn’t be?

This is plausible with today’s technology, but it does not solve the underlying issue of trust in a model’s security: to know what an organization considers sensitive, the model has to ingest the very data and policies it is meant to protect.

Keeping AI usage secure in the future requires seeing around corners to some extent, due to the speed of development. However, starting from a solid foundation—knowing what you want to protect, keeping it simple for employees, and taking additional steps beyond employee trust—will put any organization in a much better place to respond to AI developments and data breaches alike.
