AI & Cybersecurity – how prepared should we be?

Policymakers, business executives and cybersecurity professionals are all feeling the pressure to adopt AI within their operations. With this comes the threat of generative AI adoption outpacing the industry's ability to understand the security risks these new capabilities will introduce.

For IBM X-Force, IBM Consulting's security services arm, the expectation is that a universal AI attack surface will materialise once AI adoption reaches critical mass. This will force organisations to prioritise security defences that can adapt to these threats at scale.

For attackers, the best tool for compromising these networks may well be generative AI too; it is already emerging as a supplementary tool in the cyberattacker's arsenal.

Despite these looming generative AI-enabled threats, X-Force has not yet observed concrete evidence of generative AI being used directly in cyberattacks, nor a rapid shift in attackers' goals and objectives from previous years.

The risk remains, however, that existing methods of cyberattack will be enhanced by AI. In 2023, the IBM X-Force team found that many cybercriminals could wreak havoc on corporate networks simply by logging in through valid accounts, and as bad actors begin investing in AI to help them identify priority targets, this problem is only expected to worsen.

A Growing Identity Crisis

According to IBM’s 2024 X-Force Threat Intelligence Index report, cyberattacks caused by exploited user identities rose by 71% in the past year globally and represented 50% of all security incidents in the UK.

Cybercriminals are increasingly seeking the path of least resistance to get through organisations’ security measures. Chief among these inroads is the practice of exploiting valid accounts, which enables attackers to bypass initial security checks by simply logging in to an organisation’s network.

Given the ease and effectiveness of these attacks, criminal operations to gain access to users’ identities have risen sharply over the past year. In addition to accessing compromised credentials from the Dark Web, attackers are innovating and investing in infostealing malware, designed to obtain personally identifiable information like emails, social media and messaging app credentials, and banking details. In 2023, X-Force witnessed a 266% rise in this type of malware.

What Now?

With at-scale attacks harnessing generative AI looming on the horizon, it’s never been more critical for organisations to carefully examine their networks and user access structure to ensure they’re operating with sound security fundamentals.

Just as businesses seek to leverage generative AI to summarise and prioritise data, cybercriminals may turn to it for data distillation: putting AI to work with the troves of compromised data they’ve collected to identify the best targets for an attack. The interest is there—in 2023 alone, X-Force observed more than 800,000 posts about AI and GPT on Dark Web forums.

While these threats are poised to worsen as cybercriminals continue to innovate ways to expedite their attacks or improve their stealth, it’s not a problem without a solution. There are actions organisations can take to better safeguard their networks from identity-based attacks.

  1. Test & Stress Test: Organisations should frequently stress test their environments for potential exposures and develop incident response plans for when, not if, a security breach occurs. The stress tests X-Force conducted for clients in 2023 revealed that identification and authentication failures (e.g. weak password policies) were the second-most observed security risk.
  2. Leverage Intuitive Tools: When it comes to securing users' access to networks, it is not only important to ensure users are who they say they are; they need to act like it too. In today's environment it is paramount to leverage behavioural analytics and biometrics as a form of verification. Habits, typing speed and keystroke patterns are just a few behavioural signals that can verify a legitimate user, as in the keystroke-timing sketch after this list. AI-enabled tools can help detect and block anomalous behaviour before it achieves impact.
  3. Enforce Multi-Factor Authentication (MFA) for Users: Organisations can strengthen credential management by implementing MFA and strong password policies, including the use of passkeys, and by leveraging hardened system configurations that make accessing credentials more difficult. A minimal TOTP sketch also follows this list.
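
To make the behavioural-analytics point above concrete, here is a minimal sketch of keystroke-timing verification, assuming a stored per-user baseline of inter-key delays. It flags a login session whose average typing cadence drifts too far from that baseline using a simple z-score; the function names, threshold and timing values are all illustrative assumptions, not any product's API.

```python
# Minimal keystroke-dynamics sketch: compare a session's typing cadence
# against a stored per-user baseline. All values here are illustrative.
from statistics import mean, stdev

def build_baseline(enrolment_sessions: list[list[float]]) -> tuple[float, float]:
    """Aggregate enrolment sessions of inter-key delays into mean/stdev."""
    delays = [d for session in enrolment_sessions for d in session]
    return mean(delays), stdev(delays)

def is_anomalous(session: list[float], baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a session whose mean delay deviates from the user's baseline
    by more than z_threshold standard errors."""
    mu, sigma = baseline
    std_err = sigma / (len(session) ** 0.5)
    return abs(mean(session) - mu) / std_err > z_threshold

# Illustrative usage with made-up inter-key delays (seconds).
baseline = build_baseline([[0.11, 0.13, 0.12, 0.14], [0.12, 0.10, 0.13, 0.12]])
print(is_anomalous([0.12, 0.11, 0.13, 0.12], baseline))  # False: matches habit
print(is_anomalous([0.31, 0.29, 0.35, 0.33], baseline))  # True: unusual cadence
```

In practice such signals would feed a risk engine alongside device and location data rather than gate access on their own.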
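
For the MFA point, the sketch below verifies a time-based one-time password (TOTP, RFC 6238), one common second factor, using only the Python standard library. The shared secret and drift window are assumptions for illustration; a real deployment would add enrolment, rate limiting and protected secret storage.

```python
# Minimal server-side TOTP verification (RFC 6238) sketch, stdlib only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive the one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)           # moving factor
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, step: int = 30, drift: int = 1) -> bool:
    """Accept codes within +/- `drift` time steps to tolerate clock skew."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now + i * step, step), submitted)
               for i in range(-drift, drift + 1))

# Illustrative usage with a made-up enrolment secret.
secret = base64.b32encode(b"example-shared-key!!").decode()
print(verify(secret, totp(secret, int(time.time()))))  # True
```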

What Next?

For those in the process of exploring AI and defining their AI strategies, it's important to consider that securing AI is broader than the AI itself. Organisations can leverage existing guardrails to help secure the AI pipeline. The key tenets are securing the underlying training data, the models, the use and inferencing of those models, and the broader infrastructure surrounding them.

The same access points that cybercriminals leverage to compromise enterprises also put AI systems at risk. And as organisations offload operational business processes to AI, they need to establish governance and make operational guardrails central to their AI strategy, as in the sketch below.
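
As a rough illustration of what such an operational guardrail can look like, this sketch puts identity and content checks in front of model inference. The role names, blocked patterns and `model_call` stub are hypothetical, not any particular vendor's interface.

```python
# Sketch of an operational guardrail in front of model inference.
# Roles, patterns and the model stub are illustrative assumptions.
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),  # prompt injection
    re.compile(r"\b\d{16}\b"),                               # possible card number
]
ALLOWED_ROLES = {"analyst", "engineer"}

def guarded_inference(user_role: str, prompt: str, model_call) -> str:
    """Apply access and content checks before the model is ever invoked."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{user_role}' may not query the model")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected by content guardrail")
    return model_call(prompt)

# Illustrative usage with a stubbed model.
echo_model = lambda p: f"model output for: {p}"
print(guarded_inference("analyst", "Summarise today's alerts", echo_model))
```

The same pattern extends to output filtering and audit logging, so that inference traffic is governed like any other privileged access.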
