

Deep Learning: The next stage of natural selection in cybersecurity

Below is an interview between Tom Allen (TA), Editor of The AI Journal, and Chuck Everette (CE), Director of Cybersecurity Advocacy at Deep Instinct.

TA: What is deep learning and how is it different to other AI security solutions? 

CE: When organisations proudly say they have an AI cybersecurity solution, they are usually referring to machine learning. This works on the principle of security teams feeding the solution sorted and labelled datasets to help it identify patterns and links, ultimately learning the difference between malicious and benign activity.
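
To make that concrete, here is a minimal sketch of the supervised workflow Everette describes, using scikit-learn. The features, data, and model choice are hypothetical placeholders; real products extract far richer features from files and telemetry.

```python
# Hypothetical sketch: a supervised classifier trained on human-labelled
# samples, as in the machine learning approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is a file described by hand-engineered features
# (e.g. size in KB, byte entropy, count of suspicious API imports).
X = np.array([[120, 7.9, 14], [80, 3.1, 0], [200, 7.5, 9], [60, 2.8, 1]])
y = np.array([1, 0, 1, 0])  # human-assigned labels: 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_test))  # verdicts on unseen files
```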

Machine learning is a very effective security tool when it comes to dealing with known and recognised threats; however, it has its limitations. Because machine learning needs its datasets to be manually curated and labelled, sample sizes are often very small, leading to a significant loss of information and a reliance on human interaction to build the models. The result is a lower accuracy rate, a higher false positive rate, and an inability to deal with zero-day threats.

Deep learning, though, is an advanced subset of machine learning designed to mimic the human brain. It involves building a neural network that is trained on huge sets of raw data samples, consisting of hundreds of millions of files, and 'learns' to distinguish between malicious and benign code. This independence means that security teams no longer have to react to cyberattacks; instead they can predict and prevent them.
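
As an illustration of the contrast with feature-based machine learning, below is a minimal PyTorch sketch of a network that consumes raw file bytes directly and outputs a malicious/benign score. The architecture and sizes are illustrative assumptions, not Deep Instinct's actual model.

```python
# Hypothetical sketch: a neural network classifying files from raw bytes,
# with no hand-engineered features.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self, max_len=4096):
        super().__init__()
        self.embed = nn.Embedding(256, 8)             # one vector per byte value
        self.conv = nn.Conv1d(8, 32, kernel_size=16, stride=4)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Linear(32, 1)                  # single malicious/benign logit

    def forward(self, byte_ids):                      # byte_ids: (batch, max_len)
        x = self.embed(byte_ids).transpose(1, 2)      # (batch, 8, max_len)
        x = self.pool(torch.relu(self.conv(x))).squeeze(-1)
        return self.head(x)

model = RawByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))          # stand-in for a real file's bytes
print(torch.sigmoid(model(fake_file)))                # probability the file is malicious
```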

Deep learning is still a fairly new concept, but one that has seen increased usage in recent years. There are currently six established deep learning frameworks, used by some of the biggest companies in the world, such as Google, Amazon, Netflix and Tesla. However, we are the only company to have built and applied a custom framework to the challenge of cybersecurity.

TA: Why are Endpoint Detection and Response (EDR) solutions no longer enough to protect organisations against cyberattacks and what should be done instead? 

CE: Many organisations have fallen into the trap of thinking that by only implementing EDR solutions, they are protected against advanced cyberattacks. However, Deep Instinct’s research found that between 2019 and 2020 there was an 800% increase in ransomware attacks and Ponemon Research indicated that 80% of successful breaches come from previously unknown malware and zero-day attacks.

If EDR tools alone were the answer to preventing ransomware and zero-day threats we would see attacks trending downward. Instead, despite billions in spending, we’re seeing them consistently rise.

For EDR solutions to work, an attack needs to execute and run before it is picked up and checked to see whether it is malicious, which can take minutes, hours, or even days. That is too long to wait: some of the fastest ransomware can encrypt files and data in under 15 seconds. By that point, important customer or employee information could already have been stolen, damaging an organisation's reputation.

If organisations were to implement prevention-first solutions, such as deep learning, they would be able to stop malware before it could write to the system or even start encrypting. Deep learning can deliver a sub-20-millisecond response time, stopping malware pre-execution and before it can take hold of an organisation's network. As a result, instead of trying to limit the damage caused by a cyberattack, security teams are proactively preventing malware before it executes.
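
As a rough sketch of that prevention-first flow, the hypothetical example below scores a file statically before allowing it to run, in contrast to EDR tools that observe behaviour after execution has started. The score_file function, threshold, and timing are placeholder assumptions.

```python
# Hypothetical sketch: block a file pre-execution if a static scan flags it.
import subprocess
import time

THRESHOLD = 0.5  # illustrative decision threshold

def score_file(path: str) -> float:
    """Placeholder for a static, pre-execution malware classifier."""
    return 0.01  # pretend the scan found the file benign

def run_if_safe(path: str) -> None:
    start = time.perf_counter()
    verdict = score_file(path)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if verdict >= THRESHOLD:
        print(f"Blocked {path} pre-execution ({elapsed_ms:.2f} ms)")
        return
    subprocess.run([path])  # only executes if the static scan passed

run_if_safe("/usr/bin/true")
```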

The deep learning brain can also take security teams one step further by predicting cyberattacks. By analysing 100% of the raw data samples deep learning is trained on, it ‘learns’ to recognise and predict known and unknown threats, preventing cyberattacks before they take place.

Security teams are expected to stop every single threat, knowing that only one needs to be successful to cause significant damage to an organisation. Deep learning gives security teams the power to be able to prevent all threats, including zero-day threats, before they damage the environment.

TA: How can deep learning deal with the new cyber threats we are seeing such as Adversarial AI?

CE: Threat actors are constantly developing new threats and techniques to breach an organisation's environment, with adversarial AI the latest, and probably the most frightening, attack organisations are experiencing.

Adversarial AI manipulates the decision-making of solutions such as machine learning by tricking them into thinking that cyberattacks are benign and do not need to be stopped. Threat actors are starting to use their own machine learning solutions to help them architect malicious files that another security solution judges harmless. This gives attacks the freedom to move laterally across the network undetected, wreaking havoc on an organisation's environment.
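
For illustration, the toy PyTorch sketch below shows the core adversarial mechanic: using a model's gradients to nudge a malicious input toward a benign score (an FGSM-style step). The model and feature vector are assumptions; real attacks on malware classifiers must also keep the modified file functional.

```python
# Hypothetical sketch: one gradient step that pushes a "malicious" input
# toward the classifier's benign decision (FGSM-style evasion).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 10, requires_grad=True)  # stand-in features of a malicious file
benign = torch.zeros(1, 1)                  # the label the attacker wants

loss = nn.functional.binary_cross_entropy_with_logits(model(x), benign)
loss.backward()
x_adv = x - 0.1 * x.grad.sign()             # step toward "benign"
print(torch.sigmoid(model(x_adv)))          # score should drift toward benign
```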

Deep learning, however, is fully autonomous because it is trained on huge sets of raw data, including malicious files masquerading as benign. Deep Instinct's brain can pick up the subtle differences and prevent these malicious attacks from executing. This makes it almost impossible for threat actors to trick the system; they cannot pull off successful adversarial AI attacks because they can't manipulate deep learning in the same way they do machine learning.

TA: Are there any other ways in which deep learning can help support security teams in their daily work life?

CE: One of the greatest challenges currently facing cybersecurity employees is the number of false positives they have to deal with on a daily basis. Research by Deep Instinct showed that 24% of UK organisations cited the volume of false positives as one of the biggest barriers to detecting threats within the network. Dealing with false positives all day is extremely taxing, and it's unsurprising that so many security professionals are leaving the cybersecurity industry.

Add to this the years and money individuals spend at university gaining knowledge and expertise, only to be stuck in a security position looking at false positive alerts all day, and it's very demoralising. Those who feel they are not getting any worth out of their job are unlikely to stay, and ultimately false positives have been a significant contributor to the mass exodus we are currently seeing in the industry.

However, with deep learning's independent thinking capabilities, false positives are reduced to the point where alert fatigue is no longer an issue, and security teams are free to work on critical activities such as threat hunting and vulnerability patching.

This changes the mindset of those within security teams, for they can proactively deal with threats instead of responding after a malicious attack has already detonated. If employees know their efforts are making a positive difference to an organisation, they're more likely to stay, ultimately reversing the mass exodus currently seen in the cybersecurity industry.

Author

  • Tom Allen

    Founder of The AI Journal. I like to write about AI and emerging technologies to inform people how they are changing our world for the better.

