
Scaling for the Future: Why Neoscalers Need Operational Expertise to Build the Networks of Tomorrow

By Greg Friesen, Vice President of Global Services and Support, Ciena

AI is reshaping everything in our industry. It is not simply adding another application to the network but redefining the way we build and manage the underlying architecture that provides our global connectivity. As AI’s footprint expands across every geography and industry, the network must transform from a passive transport layer into an adaptive, high-performance fabric engineered for machine-scale thinking. Massive GPU clusters will pepper the globe, and a global AI network fabric will emerge, built on high-capacity optical transport ranging from 400G to 1.6T that interconnects these GPU clusters and entire data centers across geographies.
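To put those line rates in perspective, here is a rough, illustrative calculation of how long it would take to move a large AI dataset or model checkpoint between sites. The 100 TB payload and the 800G intermediate rate are assumptions for the sake of the example; 400G and 1.6T are the rates mentioned above.

```python
# Rough, illustrative transfer-time calculation for moving a large AI dataset
# or model checkpoint between data centers at different optical line rates.
# The 100 TB payload size is a hypothetical assumption, not a figure from the
# article; transfers are assumed to run at full line rate with no overhead.

DATASET_TB = 100  # hypothetical payload size in terabytes (decimal)

line_rates_gbps = {"400G": 400, "800G": 800, "1.6T": 1600}

for name, gbps in line_rates_gbps.items():
    gigabits = DATASET_TB * 1000 * 8   # TB -> GB -> gigabits
    seconds = gigabits / gbps          # idealized, line-rate-limited transfer
    print(f"{name}: ~{seconds / 60:.0f} minutes to move {DATASET_TB} TB")
```

At 400G the hypothetical 100 TB payload takes roughly half an hour; at 1.6T it drops to under ten minutes, which is why these line rates matter for geographically distributed GPU clusters.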

While hyperscalers are the default assumption for who will build out this global AI network fabric, a new class of entrants is emerging, known as neoscalers, and they have an essential role to play in enabling the future of AI.

The rise of neoscalers 

Neoscalers offer AI infrastructure, such as GPU-as-a-service and LLM operations platforms, and they compete on GPU availability, price, performance, and low-latency networking. Compared to hyperscalers, neoscalers tend to be smaller and more regionally focused. Their customers are often AI labs, startups, and enterprises that need scalable GPU clusters for AI model training and inference without buying their own hardware.

Neoscalers will play a key role in building out this critical layer of the world’s AI network fabric. The GPU-as-a-Service market alone was valued at just over $6.5 billion in 2024 but is forecast to hit $26.62 billion by 2030. This growth underscores just how indispensable high-performance, on-demand compute will become in the years ahead.
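For context, a quick calculation of the compound annual growth rate implied by those two figures (a simple sketch, assuming straight compounding over the six years from 2024 to 2030; the forecast itself is from the cited market estimate):

```python
# Sanity check: compound annual growth rate implied by the market figures
# cited above ($6.5B in 2024 growing to $26.62B by 2030, i.e. six years).
start_usd_bn, end_usd_bn, years = 6.5, 26.62, 6
cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 26-27% per year
```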

As any new entrant will appreciate, the neoscaler business and technology approach brings challenges unique to a relatively nascent offering, particularly when creating a network from scratch. Neoscalers need bandwidth, but acquiring it is not as simple as it sounds.

Scaling the network 

One of the biggest challenges neoscalers face is scaling network capacity between data centers while keeping costs down. Without a high-performance network, they simply can’t move data fast enough to support large-scale AI workloads. To address this challenge, neoscalers need three core capabilities: lightning-fast data transmission, ultra-low latency, and zero-trust resilience.  

Some have relied on leasing network capacity, but soaring bandwidth demands are exposing the limitations of that approach: traffic outpacing capacity, reduced control, and rising costs.

The speed at which they deploy networks is critical. To seize emerging AI opportunities and meet customer demands, neoscalers need networks that foster growth and innovation rather than become a bottleneck.

To build or to buy? 

There are two main ways to deploy a network capable of delivering on the promise of AI. Neoscalers can buy a managed optical fiber network (MOFN) – also known as a private dedicated network, or PDN – from a service provider. Alternatively, they can build their own network using dark (unlit) fiber. A combination of both approaches is also possible, such as initially buying a PDN while planning to build a private network later. 

Each approach has benefits and potential downsides. Buying a network can be expensive and offers less control. Building a network may be more cost-effective but requires expertise in high-capacity optical networking, a skillset some neoscaler teams may not have. A simple cost sketch after this paragraph illustrates the trade-off.
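The sketch below is purely hypothetical: every cost figure is a placeholder chosen only to illustrate the typical shape of the trade-off (lower upfront but higher recurring cost when buying a managed network, higher upfront but lower recurring cost when building on dark fiber). Real numbers would come out of the advisory and planning work described next.

```python
# Minimal, hypothetical buy-vs-build cost sketch. All figures are placeholder
# assumptions, not vendor or market data; a real comparison would use quoted
# lease fees, fiber costs, equipment pricing, and staffing estimates.

def total_cost(upfront_m, annual_m, years):
    """Simple multi-year total cost in $M, with no discounting or growth."""
    return upfront_m + annual_m * years

YEARS = 5
buy_mofn   = total_cost(upfront_m=0.5, annual_m=4.0, years=YEARS)  # MOFN / PDN lease
build_dark = total_cost(upfront_m=8.0, annual_m=1.5, years=YEARS)  # dark fiber + own gear

print(f"Buy (MOFN/PDN) over {YEARS} years:    ${buy_mofn:.1f}M")
print(f"Build (dark fiber) over {YEARS} years: ${build_dark:.1f}M")
```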

This is where managed services advisers play a key role, guiding neoscalers through planning, design, development, and testing of their network. By engaging advisers early, neoscalers can ensure their networks are designed with long-term goals in mind. The network becomes a vacant plot of land, and the adviser acts as the architect and builder, ready to construct the dream home.

During the advisory stage, neoscalers should determine the best approach, whether that is buy, build, or even a dual-track strategy. The key is understanding how to scale intelligently, and managed services advisers provide that guidance.

Ongoing network operations

Once the network is built or procured, it will require ongoing operation.  

High-speed, high-traffic networks need a secure, geo-diverse, premium operations model. Its foundation is the network operations center (NOC), which monitors and manages incidents and problems, including coordinating with fiber providers. It also includes engineering dispatch and managed spares to cost-effectively handle inventory and logistics for replacement parts, with the ability to quickly ship and replace defective equipment in the network. Networks also need robust, round-the-clock technical support. This can be delivered remotely, though some providers may even offer to embed one of their engineers within the team to provide troubleshooting and issue escalation.

Regardless of the path (or paths) chosen, an operations strategy is non-negotiable: it allows the neoscaler to scale and respond to field incidents. But as with any networking architecture, the scope of the operations model must be flexible enough to address the unique needs of the network; there is no one-size-fits-all solution. Choosing the right, flexible operations solution is critical to ensuring high availability and uptime.

A blank canvas to enable AI at scale 

Neoscalers will be essential in expanding the global AI network fabric beyond the hyperscalers, and the performance of their underlying networks will determine how successfully they do so. Whether starting from scratch or currently leasing capacity, they have a unique opportunity to design and deploy networks that precisely meet their requirements.  

Engaging professional services advisers with deep networking expertise can ensure that every decision — from planning and design to deployment and operations — is aligned with the neoscaler’s long-term goals. The network shouldn’t be a hindrance to the neoscalers’ ultimate goal of delivering GPU-as-a-Service, and with support from the outset, it won’t be. 

 
