
Zero Trust Security is a must for organisations looking to use Generative AI safely 

It’s impossible to ignore the impact of generative AI since it burst onto the scene at the tail end of last year. Some people have jumped on the technology as a workplace silver bullet, heralding a new age in which they’ll never have to face the drudgery of writing an email or report again. 

For others, it’s the beginning of a new wave of technology that looks set to bring untold benefits to every business sector, from logistics to the development of new life-saving drugs. 

But beneath the initial lustre of this game-changing technology, hailed as a significant step forward in personal productivity, concerns are emerging, not least around data privacy and security. 

Earlier this year, electronics giant Samsung banned the use of generative AI tools after reports that Samsung employees had accidentally shared confidential information while using ChatGPT for help at work. 

In an email to staff, seen by Bloomberg and widely reported at the time, the Korean company said: “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. 

“While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Samsung is not alone. A number of companies — and some countries — have banned generative AI. And it’s easy to understand why. 

Generative AI poses security threats 

In effect, using tools such as ChatGPT and other large language models (LLMs) opens the door to unmonitored shadow IT: devices, software and services outside the ownership or control of the IT organisation.

And the problem is simple. Whether it’s an employee experimenting with AI — or a company initiative — once proprietary data is exposed to AI, there is no way to reverse it.

Make no mistake. AI holds incredible promise. But without proper guardrails, it poses significant risks for businesses and organisations. 

According to a recent KPMG survey, executives expect generative AI to have an enormous impact on business, but most say they are unprepared for immediate adoption. And at the top of the list of concerns are cyber security (81%) and data privacy (78%).

That’s why chief information security officers (CISOs) and chief information officers (CIOs) need to strike a balance between enabling transformative innovation through AI and maintaining compliance with data privacy regulations.

And the best way to do this is to implement Zero Trust security controls. 

What is Zero Trust security?

Zero Trust security is a methodology that requires strict identity verification for every person and device trying to access resources across the network. Unlike a traditional ‘castle and moat’ approach, a Zero Trust architecture trusts no one and nothing. 

And it is this approach that is essential for any organisation looking to use AI. Why? Because Zero Trust security controls enable enterprises to use the latest generative AI tools safely and securely, without putting intellectual property and customer data at risk.
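
To make that concrete, here is a minimal sketch in Python of what a default-deny access decision might look like under Zero Trust, where identity, device and policy are all checked on every request. The policy store, user groups and managed-device list are invented for illustration; in a real deployment they would come from an identity provider and a device management service.

```python
# Minimal Zero Trust access-decision sketch (illustrative only).
# Nothing is trusted by default: identity, device posture and policy
# must all check out before a request is allowed.
from dataclasses import dataclass


@dataclass
class Request:
    user: str            # identity asserted with the request
    device_id: str       # device making the request
    resource: str        # destination, e.g. an AI tool or internal app
    mfa_verified: bool   # has the user passed multi-factor authentication?


# Hypothetical policy store: which groups may reach which resources
POLICY = {
    "chat.example-ai.com": {"allowed_groups": {"ai-pilot-team"}},
    "internal-crm": {"allowed_groups": {"sales", "support"}},
}

USER_GROUPS = {"alice": {"ai-pilot-team"}, "bob": {"sales"}}
MANAGED_DEVICES = {"laptop-042", "laptop-117"}


def allow(request: Request) -> bool:
    """Default deny: every check must pass for access to be granted."""
    if not request.mfa_verified:
        return False                       # identity not strongly verified
    if request.device_id not in MANAGED_DEVICES:
        return False                       # unknown or unmanaged device
    rule = POLICY.get(request.resource)
    if rule is None:
        return False                       # no explicit rule means no access
    return bool(USER_GROUPS.get(request.user, set()) & rule["allowed_groups"])


print(allow(Request("alice", "laptop-042", "chat.example-ai.com", mfa_verified=True)))  # True
print(allow(Request("bob", "laptop-042", "chat.example-ai.com", mfa_verified=True)))    # False
```

The point is the default: access is refused unless every check passes, which is the opposite of the castle-and-moat model where anything inside the network is implicitly trusted.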

Speaking at the launch of the tenth London Tech Week recently, Prime Minister Rishi Sunak argued that, as the “tectonic plates of technology are shifting”, the UK must harness innovation if it wants to become the best place for tech businesses to invest and grow.

The PM also announced that the UK is to host the first global summit on AI in the autumn to “agree safety measures to evaluate and monitor the most significant risks from AI.”

That’s all well and good. But in the meantime, organisations using generative AI need to ensure that their systems are robust enough to prevent any security issues.  

Taking steps now to protect and secure your data

For instance, that means understanding how many employees are experimenting with AI services, and what they are using them for. It also means giving system administrators oversight of, and control over, this activity in case they need to pull the plug at any time. 
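
As a rough illustration of that kind of oversight, the sketch below shows how usage of generative AI services could be reported from gateway logs, with a simple kill switch for administrators. The log format, domain list and switch are hypothetical; in practice this visibility would come from a secure web gateway or proxy rather than a script.

```python
# Illustrative sketch: reporting generative AI usage from gateway logs
# and giving administrators a simple kill switch. The log format, domain
# list and switch are hypothetical, not a real product API.
from collections import Counter

GENERATIVE_AI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

# Example proxy log entries: (user, destination host)
ACCESS_LOG = [
    ("alice", "chat.openai.com"),
    ("bob", "internal-crm.example.com"),
    ("alice", "claude.ai"),
    ("carol", "chat.openai.com"),
]

AI_ACCESS_BLOCKED = False  # flip to True to "pull the plug" for everyone


def ai_usage_report(log):
    """Count how often each user reaches a known generative AI service."""
    return Counter(user for user, host in log if host in GENERATIVE_AI_DOMAINS)


def is_allowed(host):
    """Gateway decision: block AI destinations outright when the switch is on."""
    return not (host in GENERATIVE_AI_DOMAINS and AI_ACCESS_BLOCKED)


print(ai_usage_report(ACCESS_LOG))    # Counter({'alice': 2, 'carol': 1})
print(is_allowed("chat.openai.com"))  # True while the switch is off
```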

Adopting a Data Loss Prevention (DLP) service would provide a safeguard that helps close the human gap in how employees share data. More granular rules can even allow select users to experiment on projects containing sensitive data, while stronger limits apply to the majority of teams and employees.
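
To show what those granular rules might look like, here is a simple pattern-based sketch. The patterns, group names and exemption list are made up for the example; a production DLP service would be far more sophisticated.

```python
# Illustrative DLP sketch: block sensitive data from being sent to AI tools,
# with an exemption for a small, explicitly approved group of users.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

# Hypothetical group allowed to experiment with sensitive projects
EXEMPT_GROUPS = {"ai-research"}


def contains_sensitive_data(text: str) -> bool:
    """Return True if any configured pattern matches the outgoing text."""
    return any(pattern.search(text) for pattern in SENSITIVE_PATTERNS.values())


def may_submit_to_ai(user_groups: set, text: str) -> bool:
    """Allow the upload unless it contains sensitive data and the user is not exempt."""
    if not contains_sensitive_data(text):
        return True
    return bool(user_groups & EXEMPT_GROUPS)


print(may_submit_to_ai({"sales"}, "Card number: 4111 1111 1111 1111"))        # False
print(may_submit_to_ai({"ai-research"}, "Card number: 4111 1111 1111 1111"))  # True
```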

In other words, if organisations are to use AI in all its guises, they need to improve their security and adopt a Zero Trust approach. 

And while it’s important to highlight the issue, there is no need to sensationalise concerns around a technology that has the potential to offer so much. 

After all, with every transformative step forward in technology, from mobile phones to cloud computing, new security threats rise to the surface. And each time, the industry has responded by tightening security, protocols and processes. The same will happen with AI.

Author

  • John Engates

    John Engates joined Cloudflare in September of 2021 as Field Chief Technology Officer and is responsible for leading the Field CTO organization globally. Prior to Cloudflare, John was Client CTO at NTT Global Networks and Global CTO at Rackspace Technology, Inc. Earlier in his career, John helped launch one of the first Internet service providers in his hometown of San Antonio, Texas.
