Press Release

SK Telecom and Panmnesia Sign Partnership to Innovate AI Data Center Architecture, Enhancing Cost Efficiency and Performance

“CXL-Based AI Rack” to Be Built and Validated

BARCELONA, Spain–(BUSINESS WIRE)–Panmnesia, an AI infrastructure link solution provider, today announced the signing of a strategic partnership with SK Telecom (SKT), South Korea’s largest telco and a leading AI company. The agreement, signed at MWC26 in Barcelona, aims to jointly develop a CXL-based next-generation AI data center (DC) architecture.

As large-scale AI services grow, AI DCs are expanding GPU deployments, driving up costs. Recognizing the need for sustainable scalability, SKT and Panmnesia are looking beyond simple GPU expansion to technologies that enable more efficient utilization of existing computing resources. Through this collaboration, the two companies aim to improve cost efficiency and performance by innovating DC interconnect architecture based on Compute Express Link (CXL)*.

* CXL is a high-speed, low-latency interconnect standard that connects CPUs, GPUs, and memory, enabling flexible expansion and utilization of computing resources beyond traditional server boundaries.

Background: Limitations of Modern AI DC Architectures

Modern AI DCs typically configure servers with fixed ratios of CPUs, GPUs, and memory. Multiple servers are connected via networks to form racks, and multiple racks are interconnected to build an AI DC. However, as AI models become increasingly diverse and larger in scale, this architecture faces limitations in cost-to-performance efficiency.

To address these challenges, the two companies propose:

1. Breaking away from rigid, monolithic server architecture.

2. Replacing traditional network-based interconnects with CXL.

Challenge #1

In conventional AI DCs, CPUs, GPUs, and memory are statically bundled within individual servers. As a result, unused resources in one server cannot easily be utilized by others. In particular, when memory capacity runs short, additional GPUs, often unnecessary, must be deployed alongside the extra memory, creating inefficiencies. This structure lowers GPU utilization and increases both capital and operational expenditures.

To solve this issue, SKT and Panmnesia propose a disaggregated architecture in which computing resources are separated by type and flexibly composed as needed. Instead of being confined within servers, CPUs, GPUs, and memory are interconnected at the rack level through a CXL Fabric Switch**. By dynamically allocating only the resources required for each AI workload, this approach minimizes unnecessary resource waste and maximizes cost efficiency.

**Fabric Switch is a device that flexibly interconnects multiple system devices while managing data flow between them.
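The cost benefit of disaggregation can be illustrated with a small back-of-the-envelope sketch. This is not SKT/Panmnesia code; the server sizes and workloads below are hypothetical numbers chosen only to show why pooling resources at the rack level can require less hardware than fixed per-server ratios.

```python
import math

# Hypothetical server shape for the example.
GPUS_PER_SERVER = 8
MEM_PER_SERVER_GB = 512

def servers_needed_fixed(workloads):
    """Fixed servers: each workload claims whole servers sized by its
    larger demand, so a memory-heavy job also ties up idle GPUs."""
    total = 0
    for gpus, mem_gb in workloads:
        total += max(math.ceil(gpus / GPUS_PER_SERVER),
                     math.ceil(mem_gb / MEM_PER_SERVER_GB))
    return total

def servers_needed_pooled(workloads):
    """Disaggregated rack: GPUs and memory are drawn independently from
    a shared pool, so only the totals matter, not per-server ratios."""
    total_gpus = sum(g for g, _ in workloads)
    total_mem = sum(m for _, m in workloads)
    return max(math.ceil(total_gpus / GPUS_PER_SERVER),
               math.ceil(total_mem / MEM_PER_SERVER_GB))

# One GPU-heavy job and one memory-heavy job: (gpus, memory in GB).
jobs = [(8, 64), (1, 960)]
print(servers_needed_fixed(jobs))   # 3
print(servers_needed_pooled(jobs))  # 2
```

In this toy mix, the memory-heavy job forces a second server's worth of GPUs to sit idle under the fixed layout, while the pooled layout satisfies both jobs with one less server of hardware.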

Challenge #2

The companies will also improve computational efficiency by fundamentally changing the interconnect mechanism. In conventional AI DCs, GPU collective operations, which are essential for large-scale AI training and inference, rely on general-purpose networks such as Ethernet. This introduces extra data copies and software intervention, degrading performance.

To address this limitation, SKT and Panmnesia will eliminate network involvement from computational paths and transition to CXL, allowing resources to be interconnected directly without traversing conventional networks.

At the core of this architecture is the Link Controller, an electronic component that can be integrated into CPUs, GPUs, AI accelerators, and memory devices. Within each device, it enables direct communication over CXL, replacing data transfers that previously required multiple copies with simple memory access operations. Furthermore, the architecture enables GPU-to-GPU and GPU-to-memory communication without software intervention, significantly improving processing efficiency. As a result, AI DCs can deliver higher performance without adding more GPUs.
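The difference between the two transfer mechanisms can be sketched in miniature. This is an illustrative model, not Panmnesia's implementation: it contrasts a network-style path, which stages data through intermediate copies, with the direct memory-access path that CXL enables between devices.

```python
class NetworkPath:
    """Ethernet-style transfer: data is copied into a send buffer and
    copied again on the receiving side before the GPU can use it."""
    def __init__(self):
        self.copies = 0

    def transfer(self, data):
        staged = list(data)      # copy into the NIC send buffer
        self.copies += 1
        received = list(staged)  # copy into the receiver's memory
        self.copies += 1
        return received

class CxlPath:
    """CXL-style transfer: both devices address the same memory, so a
    'transfer' collapses into an ordinary memory access."""
    def __init__(self):
        self.copies = 0

    def transfer(self, data):
        return data              # direct load/store; no staging copies

payload = [1, 2, 3]
net, cxl = NetworkPath(), CxlPath()
net.transfer(payload)
cxl.transfer(payload)
print(net.copies, cxl.copies)  # 2 0
```

Both paths deliver the same data, but the network path pays for two copies per transfer, which is the overhead the Link Controller is described as removing.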

Collaboration Details

Under this collaboration, SKT will lead the design of an architecture optimized for real-world deployment, leveraging its large-scale AI DC construction and operational expertise, along with its experience in AI model development and commercialization.

Panmnesia will implement a CXL-Based AI Rack by applying its link solutions, including CXL Fabric Switches that serve as the core of physical connectivity and Link Controllers responsible for logical integration. Through this approach, the link architecture, previously confined within individual servers, will be extended beyond server boundaries to the rack level and above.

The two companies plan to validate the next-generation AI DC architecture by running real AI models and comprehensively evaluating GPU and memory utilization, latency, and throughput by the end of this year. Following this, they intend to conduct proof-of-concept deployments in large-scale AI DC environments and pursue commercialization and business expansion.

Suk Geun Chung, Head of AI CIC at SKT, stated, “The competitiveness of AI DCs now extends beyond GPU performance alone and depends on system-level optimization encompassing memory and data flow. This collaboration will help alleviate the structural bottleneck known as the ‘Memory Wall,’ where data movement and supply cannot keep pace with increasing computational performance, thereby enhancing both the performance and economic efficiency of AI DCs.”

Myoungsoo Jung, CEO of Panmnesia, said, “Next-generation AI infrastructure will be defined not by the performance of individual devices, but by the architecture created through diverse link semiconductors. Together with SKT, we aim to present a high-efficiency AI DC model that will set a new standard in the global market.”

Availability

Panmnesia’s partners can request CXL Fabric Switches (including PCIe 6.4/CXL 3.2 switch samples) and Link Controllers (including PCIe 6.4/CXL 3.2 controllers) utilized in this collaboration project. Link Controllers are available either as IP or as custom silicon solutions.

Panmnesia is advancing toward deployment readiness beyond the prototype stage by conducting long-duration operational testing in real-world AI computational environments to verify data transmission stability and interoperability.

Companies incorporating Panmnesia’s link technology into their CPU, GPU, AI accelerator, and memory devices are expected to further strengthen their competitiveness in the AI DC market by establishing system-level integrated reliability that extends beyond validation at the individual device level.

For more information about samples, products, or partnerships, please contact [email protected].

Contacts

Media Contact:

Name: Hanyeoreum Bae

Email: [email protected]
Website: https://panmnesia.com/
