
AI, Security, and the Stakes of Trust: A View from the Front Lines

By Neil Desai, Corporate Director, Cybersecurity Executive, Singularity University Expert

The Double-Edged Sword of Generative AI 

Technology brings incredible benefits, from creating prosperity to enabling social connectivity. However, every new device or piece of software also creates new risks.

Cybercriminals use the same tools we use to innovate, like AI, to cause harm and enrich themselves unscrupulously. In many cases, they’re moving faster than governments and big companies can respond. We need to be honest about that imbalance. We need to start thinking about technology not just in terms of what it helps us do, but also what and who it leaves vulnerable.

We face a dual challenge: leveraging AI to bolster our defenses while mitigating the novel threats it introduces. For every advancement in defensive measures, there is a corresponding escalation in offensive tactics, leaving little room for error. Cybercriminals have a strong incentive to build tools that scale their attacks without adding personnel to their operations, which also limits their exposure.

Conversely, AI can serve as a force multiplier in the fight against cybercrime. It enables the analysis of vast datasets, detects anomalies in real time, and automates responses to threats, thereby enhancing the efficiency and effectiveness of cybersecurity measures. However, the same technology that empowers defenders also equips adversaries with new capabilities.
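To make the defensive side of this concrete, here is a deliberately minimal sketch of real-time anomaly detection: a rolling-baseline detector that flags event rates (say, login attempts per minute) deviating sharply from recent history. This is an illustrative toy, not any vendor’s actual method; production systems combine many signals and far more sophisticated models.

```python
from collections import deque


class RateAnomalyDetector:
    """Flag event counts that deviate sharply from a rolling baseline.

    Illustrative sketch only: a simple z-score over a sliding window,
    standing in for the far richer models real defenses use.
    """

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold          # z-score cutoff for "anomalous"

    def observe(self, count: float) -> bool:
        """Record one interval's count; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= 5:  # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(count - mean) / std > self.threshold:
                anomalous = True
        self.window.append(count)
        return anomalous


detector = RateAnomalyDetector()
for count in [100, 102, 98, 101, 99, 100]:   # normal traffic: not flagged
    detector.observe(count)
print(detector.observe(10_000))               # sudden spike: flagged
```

The point of the sketch is the asymmetry it illustrates: the detector needs a stable baseline before it can say anything, while an attacker needs only one spike.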

The ‘Black Box’ Problem in AI Decision-Making 

The “black box” nature of complex AI systems presents another pressing issue. As AI becomes more autonomous, its decision-making processes often become opaque, raising concerns about accountability and trust. This lack of transparency is unacceptable in critical sectors such as law enforcement, national security, and healthcare. If an AI system flags an individual as a risk or denies access to essential services, operators and stakeholders must understand the rationale. 

Accountability in AI is not merely a technical concern; it is a trust issue. Transparent AI systems enable stakeholders to comprehend, challenge, and improve decision-making processes. This transparency is crucial for ensuring fairness, accountability, and compliance with legal and ethical standards. As AI continues to influence decisions that affect people’s lives, our moral and civic responsibility is to ensure these systems are equitable, understandable, and defendable. 


When Offense Outpaces Defense 

There’s a dangerous asymmetry in how AI is developing in cybersecurity. While defenders are trying to secure sprawling digital ecosystems, threat actors already use generative AI to scale attacks with devastating speed. With AI, you can now attack people around the world at the push of a button. That shift—from isolated, targeted attacks to instant, global threats—is a fundamental change in the nature of cyber risk.

These capabilities aren’t confined to the virtual realm. Another major shift is from virtual disruption to physical security threats emanating from digital attacks. Cyberattacks can disrupt critical infrastructure, hospitals, and even physical safety systems. AI is accelerating the threat actor faster than it’s enabling the people trying to secure the vulnerable—and that’s a race we can’t afford to lose. 

Cybercrime Isn’t Just Economic—It’s Structural 

Cybercrime is often discussed in dollar signs, and it’s estimated that cybercrime will cost the world more than $9 trillion in 2024. But here’s what’s maddening: every phishing scheme, ransomware payout, and insurance claim actually adds to GDP. Governments’ and the private sector’s substantial investments to counter such attacks, as well as criminals’ own spending, circulate through the economy. So when we talk about the “cost of cybercrime,” we’re using the wrong measurement. It’s not about GDP; it’s about trust erosion, psychological damage, and the restructuring of justice itself.

The erosion of justice for all victims of crime in the cyber-realm is one of the most underreported trends globally. Many national police services have to triage their investigations of cybercrime based on the dollar value of incidents, with some only having the capacity to investigate crimes in the millions of dollars. That’s a rounding error to a bank, but a fraction of that would be devastating to a retiree on a fixed income.  

We need to recalibrate the moral and civic frameworks through which we evaluate digital harms. Because right now, we are telling average citizens that their pain doesn’t matter unless it moves the needle on a spreadsheet. 

Ransomware Is Down. Risk Isn’t. 

Much of the recent media focus has been on ransomware, and for good reason. We’ve seen hospitals shut down, cancer treatments delayed, and critical infrastructure disabled for days. But while volume is declining, impact is rising. Today’s threat actors aren’t just stealing data; they’re targeting systems that can inflict real-world harm, like medical devices, transportation control hubs, or power grids. 

They’re not in it for ideology alone. Ransomware has become a business model, and the math is horrifyingly simple: the more physical disruption you can cause, the higher your chances of getting paid. That’s why hospitals and municipalities are high on the hit list. They’re vulnerable—and under pressure to restore services fast. 

At the same time, phishing attacks are skyrocketing and are made more convincing through AI. Deepfake audio and cloned executive voices are now used to initiate wire transfers and infiltrate systems. One attack vector involves watching for LinkedIn job changes, then impersonating IT onboarding teams to phish new employees before they’ve even memorized their login credentials. 

Make Cybersecurity Everyone’s Business 

It’s time to stop hiding cybersecurity teams in the back office. They should be one of your first calls, not your last. Waiting until a product is built or a contract is signed to bring them in is like inviting the fire marshal to inspect your house after it’s already on fire. 

Every executive today must be literate in cyber risk, just as we expect them to understand HR and legal. Cybersecurity isn’t a vertical anymore. It’s a core leadership competency. Treat it as a differentiator, not a compliance checkbox. 

One bank I worked with realized that the only way to compete on both security and innovation was to invest in helping start-ups get cyber-certified. Why? Because their customers value safety, and start-ups can’t scale without it. That’s what I mean by a cyber mindset: seeing protection as value creation. 

The rise in AI-powered phishing, ransomware, and disinformation campaigns underscores how unevenly the benefits of AI are distributed. As AI systems become easier to deploy and harder to trace, the potential for widespread harm increases, particularly for institutions without the resources to keep pace. If we fail to close that gap, we risk allowing bad actors to set the pace of innovation. 

Cybercriminals Are Scaling Faster Than Governments 

Cybercrime is evolving faster than our response systems. Generative AI has allowed criminals to scale operations with minimal cost and maximum reach. They no longer need teams—they need tools. And most of those tools are free or open-source.  

Meanwhile, platforms are overwhelmed. Large consumer-facing brands can deal with up to 20,000 fake accounts per day imitating their corporate or regional brands. Content takedown processes are manual and slow. Deepfake detection is improving, but remains a step behind. 

We’re living in a moment where offense is automated and defense is reactive and manual. Cybercriminals will always have an incentive to innovate and stay ahead of law enforcement and cybersecurity professionals. However, businesses and society writ large have to close the gap if we are to reap the net benefits of innovation.

We Can’t Win This Alone: Multilateral Action Is Essential 

One of my greatest frustrations is that there is no unified international legal framework for cybercrime. Unlike civil aviation, cyber remains a jurisdictional black hole. Some nations let criminal syndicates operate freely, as long as they don’t target domestic assets. Others are actively co-opting cybercriminals for geopolitical leverage. 

We need coalitions of the willing. Real treaties. Charges in absentia that limit the movement of cybercriminals. Shared enforcement protocols and geopolitical pressure, including targeted sanctions. The internet has no borders, and the threats are global.

 Even successful cyber-defense efforts in the West should include development assistance to under-resourced law enforcement agencies abroad. This isn’t charity—it’s global self-defense. 

Trust Is the Currency. Lose It, and You Lose Everything. 

Most of our businesses, public institutions, and relationships run on trust. But trust, once broken, is hard to earn back. That’s why leaders must communicate cyber risk openly, not just as a list of controls on an investor report, but as an evolving, shared responsibility. 

We stand at a crossroads. AI, like all exponential technologies, is a tool—its impact depends on how we choose to wield it. We owe it to our employees and citizens, especially the most vulnerable among us, to innovate responsibly: to foresee threats and mitigate them proactively. By approaching this moment with courage, clarity, and collaboration, we can harness AI to enhance our systems’ security, strengthen our institutions, and build more resilient societies.
