AI Vacuums: What They Are and Why They Pose a Risk

By Jonathan Armstrong

Artificial intelligence is dominating conversations everywhere right now, but what exactly are AI vacuums, and why could they pose a risk to organisations? 

You’d have to be completely disconnected from the internet to miss the changes affecting major search engines. Search engines are moving away from traditional paid-for and organic listings and prioritising AI-generated summaries: many now display an AI-generated answer at the top of the results page, and users increasingly treat those answers as credible. A recent YouGov study, for instance, found that over 50% of respondents prefer AI summaries to traditional search listings. Click-through rates are also high, as these AI summaries typically include links to source content to validate the information.

This shift has significant implications not only for search engine revenue models but also for information security, compliance and legal risk. 

Understanding the Risk 

Internet scams have existed for as long as the internet itself. Historically, attackers diverted traffic from legitimate sites through typo-squatting, misleading domains, metatag misuse, or paid search manipulation. As user search behaviour has evolved, scammers have adapted too. 

Many of these earlier scams relied on businesses not having a strong online presence. Where a digital information gap existed, attackers could exploit it to capture traffic for their own purposes. AI-first search creates a similar environment, where information vacuums can be exploited. 

With AI-first search, a number of risks emerge: 

  • Manipulated AI summaries could redirect users to scam sites, hijacking an organisation’s reputation or diverting its potential customers. 
  • Investment and employment scams could be amplified through AI-generated content. 
  • Credential phishing could be reinforced by fraudulent pages designed to surface in AI-generated answers. 

The low cost of AI makes these attacks more feasible and scalable. According to Nina Schick, three years ago a million tokens of AI inference cost $60; today the same amount of inference costs just six cents, a thousand-fold reduction. This reduction enables threat actors to experiment at scale and probe for vulnerabilities far more cheaply.

So far, most public examples of AI vacuums being exploited have been light-hearted or humorous, such as generating absurd recipes from Reddit posts. However, the potential for serious harm exists due to the way AI-first search functions. 

Why AI Vacuums Are Especially Concerning 

Earlier generative AI models were trained on fixed, pre-assembled datasets such as Common Crawl. Modern models draw on broader sources, including live crawling of websites that permit AI access. However, how these models source and use data often lacks transparency. For example, on 9 December 2025, the European Commission opened an investigation into Google over concerns about the data used to train its GenAI models.

Efforts by organisations to protect their intellectual property can sometimes worsen the problem. Most reputable AI crawlers, such as OpenAI’s GPTBot and Anthropic’s ClaudeBot, respect technical measures like robots.txt files, which tell bots which parts of a site they may access (Google offers the Google-Extended token for the same purpose). But if an organisation restricts AI access too heavily, information vacuums can appear.
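To illustrate how these controls behave in practice, the short Python sketch below uses the standard library’s urllib.robotparser to test how a hypothetical robots.txt policy treats the AI crawler tokens mentioned above. The example.com paths and the policy itself are illustrative assumptions, not a recommended configuration.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical policy: keep AI crawlers out of most of the site, but leave the
# public newsroom open so an information vacuum is less likely to form.
# (Allow is listed before Disallow because urllib.robotparser applies the
# first matching rule rather than the longest match.)
ROBOTS_TXT = """\
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Google-Extended
Allow: /newsroom/
Disallow: /

User-agent: *
Disallow: /internal/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Check which pages each AI-related token may fetch under this policy.
for agent in ("GPTBot", "ClaudeBot", "Google-Extended"):
    for url in ("https://example.com/newsroom/results.html",
                "https://example.com/products/pricing.html"):
        print(f"{agent:16} {url:50} allowed={parser.can_fetch(agent, url)}")
```

Pointed at a site’s real robots.txt (via RobotFileParser.set_url and read), the same check shows at a glance whether a blanket Disallow is starving AI search of legitimate content.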

A key concern is that many brands are underrepresented or invisible in AI summaries. The GEOMETRIQS study (October 2025) found that among the top 80 brands analysed, average visibility was only 4%, with one in five brands completely absent. Financial services were the second-worst sector at 2.9%, suggesting increased risk of financial scams. Brands outside Anglo-American markets fared worse. 

How Organisations Can Respond 

To mitigate these risks, organisations should review their AI strategy and risk profile. Suggested measures include: 

  1. Monitoring AI-generated results regularly, as AI search outputs can change frequently (a simple monitoring sketch follows this list). 
  2. Developing an AI optimisation strategy, similar to traditional SEO, including reviewing robots.txt configurations and making content AI-friendly. Employ Generative Engine Optimisation (GEO) and E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) principles to improve credibility. 
  3. Integrating AI risk management into brand protection, covering domain monitoring, trademark enforcement, and other reputation safeguards. 
  4. Promoting AI literacy internally. Educating staff on AI risks and opportunities aligns with EU AI Act requirements and strengthens mitigation strategies. 
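As a minimal sketch of the monitoring step above (point 1), the Python snippet below scans the text of AI answers an organisation has already collected, extracts the links they cite, and flags answers that mention the brand without citing its official domain. Every name and data value here (the example-brand.com domain, the sample answers) is a hypothetical assumption for illustration, not part of any real monitoring service.

```python
import re
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "example-brand.com"      # hypothetical official domain
BRAND_TERMS = ("example brand",)           # terms the answers were collected for

# Hypothetical AI answers, collected manually or exported from a monitoring tool.
collected_answers = [
    "Example Brand offers pension advice; see https://example-brand.com/services "
    "for details.",
    "You can invest with Example Brand at https://examp1e-brand-invest.net/signup.",
]

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def cited_domains(answer: str) -> set[str]:
    """Return the host of every URL cited in an answer, without a www. prefix."""
    return {urlparse(url).netloc.lower().removeprefix("www.")
            for url in URL_PATTERN.findall(answer)}

for answer in collected_answers:
    domains = cited_domains(answer)
    mentions_brand = any(term in answer.lower() for term in BRAND_TERMS)
    # Flag answers that talk about the brand but point readers elsewhere.
    if mentions_brand and OFFICIAL_DOMAIN not in domains:
        print("REVIEW: brand mentioned without official source:", domains)
    else:
        print("OK:", domains or "no links cited")
```

In practice the collected answers would come from periodic manual checks or a commercial brand-monitoring feed; the point of the sketch is simply that absence of the official domain, or presence of a lookalike domain, is the signal worth escalating.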

Further Information 

  1. EU AI Act and AI literacy: The EU Artificial Intelligence (AI) Act | FAQs 
  2. European Commission investigation: Commission opens investigation into possible anticompetitive conduct by Google 
  3. GEOMETRIQS study: GEOMETRIQS October 2025 report 
  4. Recent Punter Southall Law AI projects: Artificial Intelligence (AI) Lawyers 

 
