The internet is becoming divided into separate spaces for humans and AI, driven by a surge in automated bots that change how information is collected, accessed, and trusted. This split introduces new risks for security, control, and information quality, affecting both web publishers and users.
The internet is starting to change in ways that might be easy to miss at first glance. For years, websites were designed for two kinds of traffic: everyday users and web crawlers, which simply indexed pages to improve search engine results. Recently, a new kind of traffic has flooded the internet’s lanes, moving faster, scraping harder, and threatening to change the shape of the web as we know it.
AI and LLM indexing bots have sharply increased their web presence in a short period, with their share of traffic rising from 2.6% to 10.1% in just eight months; OpenAI’s GPTBot alone reportedly grew 305% over the same timeframe. According to the 2026 AI Bot Impact Report, automated bots as a whole now account for 52% of all web traffic worldwide.
Unlike standard web crawlers, AI bots are not just helping users discover links. They are gathering huge volumes of content to answer user prompts, train new AI models, and power AI products. In practice, this means more automated requests hitting websites more often, increasing strain on servers.
But this shift has wider implications than traffic patterns alone. “If more of the web is shaped around AI collection and AI consumption, we will end up with two different internet layers: one built for people and another built for machines,” says Ignas Anfalovas, Senior Engineering Manager at IPXO, a company that provides IP address infrastructure solutions aimed at supporting network stability and reducing the risks of IP abuse, including abuse by malicious AI-driven bot traffic. “This kind of split internet would affect how web content is accessed, who gets paid for it, and how much trust users can place in what they find online.”
Rising bot traffic puts pressure on operating costs and quality
The surge in AI bot traffic brings new challenges around security, consent, and site management. Unlike conventional web crawlers, many AI bots bypass robots.txt files and other access conventions that tell crawlers which pages they may and may not visit, flooding sites with requests without regard for resource limits. For smaller publishers and platforms, that behavior translates directly into higher cloud and server bills.
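For illustration, robots.txt is simply a plain-text file at the root of a site listing which crawlers may fetch which paths. A publisher that wanted to opt out of OpenAI’s crawler, for example, could add two lines such as:

    User-agent: GPTBot
    Disallow: /

Compliance, however, is voluntary: the file is a request, not an enforcement mechanism, and a bot that chooses to ignore it faces no technical barrier.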
“When traffic shifts from human users to automated collectors, suddenly the main focus is not just serving your audience but balancing requests from both humans and machines,” says Anfalovas. “This shift may not only raise infrastructure costs but also reduce the incentive to make sites user-friendly, since many bots do not recognize or care about details of web design, only content.”
As the share of AI-driven visits grows, website owners have less reason to invest in visual quality or interactive features, which matter little to how automated systems read and process web content. Over time, that could erode the overall quality and accessibility of the web, leaving real users with fewer compelling reasons to browse or interact online.
Rising trust and accountability issues
When web content is designed and optimized for AI systems, there is also a risk of creating a feedback loop in which AI models are trained primarily on content generated by and for other AI models. In such a closed circuit, mistakes and hallucinations are amplified, and the quality of online information degrades over time. By some estimates, more than 50 percent of online articles are already AI-generated.
“If pages are built almost entirely from automated summaries or recycled data produced by older models, search results will just fill up with duplicated or incomplete information,” says Anfalovas. “In that kind of environment, original reporting and expert sources will be much harder to identify, let alone verify.”
This problem is compounded by the emergence of the so-called agentic web, where AI agents perform tasks and take actions on behalf of users, not just retrieving information, but conducting in-depth research and even making purchases. In this new environment, AI systems act as digital intermediaries between human users and online platforms.
“As users put more trust in such systems, it becomes hard to know whether outcomes and actions reflect a person’s wishes or an AI’s interpretation,” adds Anfalovas. This change raises challenges in how we audit and trace online actions, blurring the lines around accountability and trust. “In the past, an executive could say something like, ‘Oh, my assistant sent the wrong invitation.’ Now, a similar pattern could emerge with AI agents: ‘The agent made that decision, not me.’ So, who is ultimately responsible?”
A split internet and its risks
Looking ahead, the web is likely to split into two main layers: one designed around user experience and another optimized for automated bots. In this model, content for AI would be packaged and filtered differently and delivered through exclusive feeds or even paid channels, changing publishers’ business models and making information access, security, and control considerably more complicated.
A deeper problem is increased vertical integration, where the same provider controls both the AI tool and the platform hosting the data or service. When large tech companies power both the decision-making AI system and the backend that carries out those decisions, it becomes harder to tell whether outcomes are optimized for the user’s benefit or for the provider’s. Users may not even realize when services quietly shift in ways that suit the platform better than themselves.
The more internet tools are built solely for AI agents, the less transparent the process becomes. Control over access, resources, and outcomes may rest with only a few companies, limiting visibility for outside parties and increasing the risk of biased, one-sided content streams. Smaller sites and independent voices may struggle to keep up or get noticed. The design decisions made now will affect not only how data is shared or sold but also whose interests are served and whose information stays visible to both people and machines.
