
Artificial Intelligence in an age of cyber-uncertainty: What to look for?

By Rene-Sylvain Bedard, author of Secure by Design

With the explosion in the number and popularity of AI models, the trend is also rapidly becoming a goldmine for cybercriminals.

LLMs and social acceptability

I consider LLMs to be the appetizer of AI. I strongly believe they were created to gain social acceptability, simply to ensure that AI would be adopted by the masses. Imagine if, in 2022, they had presented an artificial general intelligence with an IQ of 1,500. What would the reaction have been? Panic and hysteria. And yet today, a mere three years later, we have conferences explaining the benefits of a super AI.

Social acceptability.

The black box issue

At its heart, the primary cybersecurity issue I see is that AI models are highly complex black boxes. Since AI began trending, marketplaces of models have been multiplying. Those models are tested, but they cannot be fully explained.

How many of those models might have been poisoned or created with biases? What kind of issues can they generate in the future, once they are fully integrated into the enterprise?

We are easily amazed when a model answers very complex questions correctly, or even provides better answers than a specialist would. Therein lies an actual risk. We need to look beneath the surface and ask for explainability. We cannot blindly accept something that might have darker designs built in without exposing our private data and business to cybercriminals.

Flawed humans, data and models

As we are flawed, it is only normal that our creations are also somewhat flawed. Over the past decade, we have seen multiple efforts to put boundaries in place to ensure that our models do not reflect some of our darkest flaws, such as bias or racism.

What happens when cybercriminals want to use these flaws to their advantage, perhaps even inject poison into the veins of the model? Not only could it derail your project but, if properly implemented, it could mean that your data is now accessible, or sent, to a third party, who would then have access to your intelligence and your intellectual property.

Then there are the extremes. Even without talking about cybercriminals, we can see some right-wing movements currently undoing years of thoughtful, scientific approaches and replacing them with unregulated ones, opening the door to nightmarish scenarios.

History has never been kind to those who use power without wisdom.

Privacy in a world of generated content

The use of public AI is also plaguing enterprises today, even before they have implemented any AI internally. The absence of governance, and of actual direction for employees, is opening the door to numerous data exfiltrations that can most likely never be undone.

I have seen webinars where AI specialists demonstrated the wonders that can be achieved by feeding your yearly financial statements into a GPT. I could hear my inner self scream “DON’T!!”, and yet it is happening throughout the world, and users do not know any better.

What happens once enough of your corporate knowledge and intelligence has been fed into a public model? Where is your uniqueness, your competitive edge? It is now published for anyone to use.

Can you sue OpenAI to remove your private data from its model? It was put there willingly by one of your employees.

One of the greatest marketing campaigns in Canada has been built around the Caramilk chocolate bar, asking “What is the secret of the Caramilk?” And, just for fun: if you ask any GPT, it will tell you what this secret is.

Understanding AI from a cybersecurity standpoint

From a cybersecurity and GRC (governance, risk, and compliance) standpoint, AI is not good news. It is highly complex and easy to use, but above all it is very easy to misuse, and it is hard to restrain and control. The perfect storm.

Furthermore, we are aware that cybercriminals are also using it for their own benefit. This includes:

  • Improved phishing attacks
  • Better tools and malware
  • Accelerated methods and processes
  • Increased rates of infection
  • Reduced time to market for complex, successful attacks

So how do you implement concepts like Zero Trust in AI? Cybersecurity has to be involved from the ground up; it is the only way to secure every layer of the process.

Here is an example:

  • A model is chosen for your next AI agent. Has it been tested for security? Is it safe?
  • You want to train said model on your data. Is the data secure, and is it monitored to ensure no bad actor can manipulate or infiltrate it?
  • Your executives want to test this new AI agent. Is their environment, such as the devices and identities they are using, safe from prying eyes and monitored against threats?

And I could keep going for every step of the way.
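
To make the first checkpoint concrete, here is a minimal sketch in Python of one possible control: refusing to load a downloaded model artifact unless it matches a checksum obtained from a trusted source. The file path and checksum below are placeholders, not references to any real model.

```python
# Minimal sketch: verify a downloaded model artifact against a checksum
# published by a trusted source before it is ever loaded. The path and
# checksum are placeholders.
import hashlib
import sys

TRUSTED_SHA256 = "replace-with-the-publisher's-published-checksum"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("models/agent-model.bin") != TRUSTED_SHA256:
    sys.exit("Model artifact does not match its trusted checksum; refusing to load.")
```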

Cybersecurity must become second nature. It cannot be an afterthought, and the question must be asked at every step of the way.

AI in the enterprise: Yes, but at what cost?

When customers ask me, “Can we implement AI to support our sales or our executives?”, the answer will always be yes, but please, in a structured way. You do not want your trade secrets or your payroll to be accessible to the youngest intern, or to a cybercriminal who has been lurking in a dark corner.

You need your data to be properly secured before you expose it to an AI model, and you need to be sure how that security will be reflected in the model’s answers. Not all models are born equal, and not all of them were designed with cybersecurity in mind.
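
As a deliberately simplified sketch of what “properly secured” can mean in practice, the Python below gates documents before they are indexed for an AI assistant: anything sensitive that is over-shared gets flagged for remediation first. The Document shape, labels, and logic are hypothetical, not any specific product’s API.

```python
# Hypothetical sketch: only admit documents into an AI index if their current
# access-control list is not obviously over-shared for their sensitivity.
from dataclasses import dataclass, field

SENSITIVE_LABELS = {"payroll", "trade-secret"}

@dataclass
class Document:
    name: str
    labels: set = field(default_factory=set)
    readers: set = field(default_factory=set)  # today's access-control list

def admit_to_ai_index(doc: Document) -> bool:
    # Over-shared sensitive data must be fixed before indexing; otherwise the
    # AI will happily surface payroll to the youngest intern.
    if doc.labels & SENSITIVE_LABELS and "all-employees" in doc.readers:
        return False
    # In a real pipeline, the index entry would inherit doc.readers so that
    # the model's answers respect the same permissions as the source file.
    return True

print(admit_to_ai_index(Document("payroll-2025.xlsx", {"payroll"}, {"all-employees"})))  # False
```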

And even if you are deploying Microsoft Copilot within your own tenant, where the model is read-only and the architecture ensures that your data stays yours, it still requires cybersecurity. If all of your company’s information is reachable through a model and someone takes over your credentials, it is over: they have access to all of it. And yes, they can exfiltrate it to the dark web.

Let’s make sure your fences are properly set and your perimeter is secure before letting the horses run wild and free.

AI as a tool to secure the enterprise

Then there is the other side of the coin: the side where we, as cybersecurity professionals, use AI to fight cybercrime. It helps us daily. How?

  • Finding the needle in a haystack
  • Reducing the time to analyze logs
  • Increasing the efficiency of research to discover threats
  • Recognizing behaviour patterns

And so on.
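
As one illustration of the pattern-recognition point above, here is a minimal sketch using scikit-learn’s IsolationForest to flag unusual sign-in events for analyst review. The features and numbers are invented for the example.

```python
# Illustrative sketch: an unsupervised anomaly detector surfaces the
# "needle in a haystack" so an analyst reviews the oddest events first.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one sign-in event: [hour_of_day, failed_attempts, mb_downloaded]
events = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10], [15, 1, 9],
    [3, 7, 900],  # 3 a.m., many failures, huge download: worth a look
])

model = IsolationForest(contamination=0.1, random_state=0).fit(events)
for event, label in zip(events, model.predict(events)):
    if label == -1:  # -1 means the model considers the event anomalous
        print("Review:", event)
```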

With these new, improved tools, we can finally see the gap starting to close: our cyberdefenders are more efficient, detect threats sooner, and react faster. This allows us to stop more cyberattacks before they begin, improving corporate cybersecurity as a whole.

Finally, the planetary cost of AI

It is well known that current models are power-hungry. They are, for lack of a better word, fat, and they demand tons of compute power. This is also a major risk when deploying and using AI. How much will it increase your carbon footprint? Are you looking at hybrid AI to reduce your power consumption?

Let’s answer those questions in order. The carbon footprint of AI? Yes, one of the largest and least recognized security concerns around AI is its impact on the planet, our home. Current energy consumption for Microsoft’s datacenters alone has prompted the reopening of a nuclear power plant in the United States to meet demand. So, my question to you is: what is your AI carbon footprint?
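
If you want to put a first number on that question, a back-of-envelope calculation is enough to start the conversation. Every figure below is a placeholder to be replaced with your own measurements.

```python
# Back-of-envelope sketch of an AI carbon footprint. All numbers are
# hypothetical; substitute your own usage and grid data.
queries_per_day = 50_000       # hypothetical enterprise usage
wh_per_query = 3.0             # hypothetical energy per LLM query, watt-hours
grid_g_co2_per_kwh = 400       # hypothetical grid carbon intensity

kwh_per_year = queries_per_day * wh_per_query * 365 / 1000
tonnes_co2_per_year = kwh_per_year * grid_g_co2_per_kwh / 1_000_000
print(f"{kwh_per_year:,.0f} kWh/year, about {tonnes_co2_per_year:.1f} t CO2/year")
```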

Second question: hybrid AI, what is that? It is when, instead of answering questions solely through machine learning (ML), we also bring the second aspect of AI, known as operational research (OR), into the mix. The best analogy I could come up with is this: with ML, you provide the machine with a billion cat pictures so it learns to recognize a cat. With hybrid AI, you provide it with 30,000 pictures instead, plus everything we know about cats: all the rules we know that make a cat a cat. According to the various experts I have talked to, hybrid AI is not only much faster to train but also more efficient, with a much smaller carbon footprint.
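
The cat analogy can be sketched in a few lines of Python. This toy example only illustrates the general idea of combining a learned score with explicit, human-written rules; the thresholds and rules are invented for illustration.

```python
# Toy sketch of the hybrid idea: a small learned model plus explicit rules,
# so far fewer training examples are needed. Everything here is invented.
def looks_like_cat_ml(features: dict) -> bool:
    # Stand-in for a model trained on ~30,000 images instead of a billion.
    return features["model_score"] > 0.6

CAT_RULES = [
    lambda f: f["has_whiskers"],
    lambda f: f["ear_shape"] == "pointed",
    lambda f: f["legs"] == 4,
]

def is_cat(features: dict) -> bool:
    # Hybrid decision: the learned score AND everything we know about cats.
    return looks_like_cat_ml(features) and all(rule(features) for rule in CAT_RULES)

print(is_cat({"model_score": 0.8, "has_whiskers": True, "ear_shape": "pointed", "legs": 4}))  # True
```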

Conclusion

I truly hope that AI can be the copilot and the support we need as a society. I am also hopeful that it will support our striving for a future free of cybercriminals.

To you, business owners and decision-makers, I say this: be mindful, aware even. This new shiny object is not only a diamond that can propel your business to the next level; it can also cut you and leave you bleeding out. Be prudent about how you let it run free in your business.

On a more positive note, AI is also helping us, your cyberdefenders, make the world a safer place. So use us, and together we can make your business a safer place too.
