AI vs AI: The Practical and Ethical Considerations

By James McQuiggan, Security Awareness Advocate at KnowBe4

Artificial Intelligence (AI) continues to evolve, and the cybersecurity landscape remains a cat-and-mouse game between defenders and attackers. Organisations use AI to detect anomalies, automate security responses, and analyse real-time threat data. However, cybercriminals are leveraging AI to refine their tactics, making phishing emails more convincing, automating social engineering, and even creating malware that adapts to evade detection. The result is an AI vs AI arms race, where both sides continually refine their tools.

This race is exemplified by well-documented issues like spear phishing and automated social engineering, which are explored in more detail below. Organisations must recognise that AI, like any technology, is not a foolproof solution; it is a tool that must be managed, monitored, and continuously assessed. The same technology that strengthens security can also be used against it. The challenge is not just keeping up with AI-driven attacks but staying ahead with advanced security measures, continuous education, and responsible implementation.

Below, we will look at some real-world examples where AI automation collides with the risks of overreliance.

Smart Contracts: When AI Closes Deals

Smart contracts are one of the clearest examples of automated, machine-executed agreements. They allow two parties to enter into an agreement and transact seamlessly without needing a notary or other intermediary. When pre-defined conditions are met, these contracts execute automatically.

Smart contracts have found their way into industries like music and real estate. Next time your favourite artist releases a track onto a streaming service, their royalties might be distributed automatically thanks to smart contracts.

These digital agreements can calculate and distribute royalties based on pre-defined parameters, effectively eliminating the middleman.
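
To make the mechanism concrete, here is a minimal sketch in plain Python rather than actual on-chain contract code; the parties, shares, and payout threshold are invented purely for illustration. It shows how pre-defined parameters and a trigger condition drive execution with no middleman:

```python
# Conceptual sketch of a royalty-split agreement (plain Python, not an
# on-chain smart contract). The parties, shares, and threshold below are
# hypothetical examples.

ROYALTY_SPLIT = {            # pre-defined parameters agreed by both parties
    "artist": 0.70,
    "producer": 0.20,
    "label": 0.10,
}

def settle_royalties(streaming_revenue: float, minimum_payout: float = 50.0) -> dict:
    """Distribute revenue automatically once the payout condition is met."""
    if streaming_revenue < minimum_payout:
        # Condition not met: the agreement simply does not execute yet.
        return {}
    # Condition met: execute exactly as coded, with no middleman involved.
    return {party: round(streaming_revenue * share, 2)
            for party, share in ROYALTY_SPLIT.items()}

print(settle_royalties(1000.00))
# {'artist': 700.0, 'producer': 200.0, 'label': 100.0}
```

That determinism is the appeal, and also the risk flagged below: the code distributes whatever figures it is fed, correct or not.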

We have also seen fractional ownership in real estate. Imagine buying a token that grants you a share in a property. It is like having a slice of the proverbial pie without the messy kitchen cleanup. Think of it as property investment with a side of fancy algorithms.

It is worth noting that while smart contracts sound fantastic, they are not without their challenges. Bugs in the code can lead to breakdowns in contract execution, resulting in losses. As with any automated system, the cautionary tale of “garbage in, garbage out” applies here too – if the input data is flawed, these contracts will merrily execute, leading to chaos.

Hiring Processes: The AI Behind the CV

AI can optimise the hiring process by automating time-consuming tasks, offering benefits to both recruiters and candidates. However, careful implementation is essential to avoid potential pitfalls. For example, recruitment agencies are using AI to sift through heaps of CVs, often rejecting excellent candidates who might simply lack a keyword in their documents.
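
As a purely illustrative sketch (not any vendor's actual screening product; the required keywords and candidate text are made up), naive keyword filtering shows how easily this goes wrong:

```python
# Illustrative sketch of naive keyword-based CV screening. The required
# keywords and the CV text are invented examples, not a real product.

REQUIRED_KEYWORDS = {"kubernetes", "terraform", "python"}

def passes_screening(cv_text: str, required: set[str] = REQUIRED_KEYWORDS) -> bool:
    """Reject any CV that is missing even one required keyword."""
    words = set(cv_text.lower().split())
    return required.issubset(words)

strong_candidate = "Ten years running python services on kubernetes clusters"
print(passes_screening(strong_candidate))  # False: never says "terraform"
```

A decade of directly relevant experience is discarded because one exact token is absent, which is precisely the failure mode described above.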

According to a KnowBe4 survey of 1,001 HR professionals in organisations in the UK with over 100 employees, 66% of HR professionals identified CVs generated by AI, with a significant portion being fraudulent. On the other hand, 36% of HR teams are already utilising AI tools to streamline the screening of job applications, while 29% employ AI technology to craft job specifications.

As a result, candidates have turned to AI to craft perfectly tailored CVs, leveraging LLMs to help them outsmart the very systems designed to evaluate them. The irony? The best AI-generated CV doesn’t necessarily belong to the most qualified candidate; it belongs to the person who can manipulate AI into crafting the shiniest story. We have shifted from evaluating talent to evaluating how skilfully candidates can wield AI, flattening candidate quality in the process. A laughable predicament.

This scenario feeds into a cyclical battle. Like AI-savvy job seekers, criminals are also taking advantage of LLMs, but for far more nefarious ends: devising even more sophisticated phishing attacks.

It is not just the criminals getting crafty, though. Using the same tools, legitimate organisations are arming themselves against the growing tide of AI-generated mischief. AI vs AI is turning into a digital arms race that is as entertaining as it is alarming.

AI and Spear Phishing

One of the clearest examples of the AI vs AI arms race is spear phishing. Generative AI-powered large language models (LLMs) allow attackers to generate phishing emails that lack the tell-tale signs of past scams, such as poor grammar and spelling. This makes it easier for cybercriminals to impersonate executives, trick employees into revealing credentials, or manipulate users into clicking on malicious links. On the defensive side, AI-driven security tools analyse email patterns, network feeds, and signals from other security tools to identify anomalies and flag suspicious events before they become dangerous. This capability has certainly levelled up attackers' campaigns; the industry has seen these attacks increase by almost 30% since 2023.
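
To illustrate the defensive side in the simplest possible terms, here is a hedged sketch: a handful of hand-written heuristics over invented email metadata. Real AI-driven tools learn such patterns from far richer telemetry rather than fixed rules, and the directory entry, domains, and scoring below are assumptions made purely for illustration.

```python
# Simplified illustration of anomaly scoring for an inbound email.
# Real AI-driven defences model many more signals with machine learning;
# the directory entry, domains, and scoring rules here are invented.

from dataclasses import dataclass

@dataclass
class Email:
    display_name: str
    sender_domain: str
    reply_to_domain: str
    subject: str

KNOWN_EXEC_NAMES = {"jane doe (ceo)"}   # hypothetical directory entry
COMPANY_DOMAIN = "example.com"

def anomaly_score(msg: Email) -> int:
    score = 0
    # Display name impersonates an executive while the mail is external.
    if msg.display_name.lower() in KNOWN_EXEC_NAMES and msg.sender_domain != COMPANY_DOMAIN:
        score += 2
    # Reply-To points somewhere other than the visible sending domain.
    if msg.reply_to_domain != msg.sender_domain:
        score += 1
    # Pressure language commonly seen in credential-theft lures.
    if any(word in msg.subject.lower() for word in ("urgent", "wire", "password")):
        score += 1
    return score

suspect = Email("Jane Doe (CEO)", "examp1e-corp.net", "gmail.com", "URGENT wire request")
print(anomaly_score(suspect))  # 4 -> quarantine and flag for review
```

The point is the shape of the contest: the better generative models get at removing obvious tells like bad grammar, the more defenders must lean on signals that are harder for an attacker to fake, such as sending infrastructure and behavioural context.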

Alongside broad phishing and social engineering attacks comes spear phishing, where cybercriminals aim their attacks at specific groups or individuals. Conducting a spear phishing campaign requires reconnaissance of the target or group, typically using Open-Source Intelligence (OSINT) tools and tactics.

Ethical Considerations in AI Interactions

What happens when AI systems interact without human intervention? It’s similar to giving admin rights to every user on a network and hoping for the best. Accountability becomes murky when perfectly crafted algorithms make decisions with real-world consequences. Who foots the bill when things go wrong: the programmers, the AI vendors, or the machines themselves?

There’s a thin line between enhancing workflow and creating chaos. Where do we draw the line for responsible AI deployment in situations involving human lives or livelihoods? As AI continues to take charge in various domains, creating frameworks for ethical AI use is absolutely paramount.

Uploading unauthorised content, such as lecture slides, medical records, organisational strategy plans, or firmware code, to platforms like ChatGPT or Gemini raises significant ethical and legal concerns, as unauthorised distribution of copyrighted material contributes to substantial economic losses. For instance, the U.S. economy loses at least $29.2 billion annually to online piracy.

Moreover, a study found that 26% of online content consumers accessed at least one illegal file over a three-month period, highlighting the prevalence of unauthorised content sharing. These statistics underscore the importance of respecting intellectual property rights and ensuring that only rightful owners distribute their work. Integrating AI tools therefore necessitates an ethical review, especially regarding proprietary materials, given that organisations invest significant effort in developing intellectual property through research and development.

From an educational perspective, the European Network for Academic Integrity (ENAI) emphasises that while AI can enhance learning, its application must align with ethical standards to maintain academic integrity. Educational institutions must therefore establish clear guidelines on the ethical use of AI, ensuring that both students and faculty are aware of the implications of uploading and sharing materials without proper authorisation.

Additionally, as AI-generated content gets more sophisticated, the risk of bias and discrimination becomes even more pronounced. Take hiring practices, for example – if AI arbiters rank candidates based on flawed historical data, we might find ourselves perpetuating existing biases, leaving the best candidates behind.

Transparency, interpretable algorithms, and accountability must be core tenets of AI development. The onus is on us to ensure that AI doesn’t inadvertently breed new forms of inequality or perpetuate prejudice through faulty programming. As cybersecurity practitioners often say, ‘trust, but verify.’ If we don’t keep our eyes wide open, it’s easy to slip into complacency amid a sea of automated interactions.

Crafting a Collaborative Future

The increasing prevalence of AI vs AI interactions presents a unique tapestry of opportunities and challenges. From cooperative processes like smart contracts to the continual battle against phishing attacks, we are witnessing a shift in the very fabric of our digital lives.

We are at the dawn of a new era where intelligent machines are not only transforming industries but also prompting critical introspection regarding the ethical ramifications of their interactions. While we are excited by the possibilities, we must also face the challenges head-on.

Let’s ensure that our journey toward a future dominated by AI interactions does not become a laundry list of errors. Instead, we should collectively work to establish a framework where humans and AI co-exist harmoniously. By prioritising transparency, responsibility, and continuous engagement, we can redefine the narrative,[1] ensuring that the most significant conversations happen, not only among machines, but between them and us.
