What Is UALink? The Open High‑Performance Interconnect Shaping AI Accelerator Clusters

UALink is an open, high‑performance interconnect standard designed to link thousands of AI accelerators, offering NVLink‑level bandwidth, low latency, scalability, cost efficiency, and flexible topologies to meet the demanding communication needs of modern AI workloads.

Background and Goals of UALink

As AI models grow explosively in size, the required compute clusters and interconnect performance have reached unprecedented levels. Proprietary solutions such as NVIDIA's NVLink restrict vendor participation and fragment the ecosystem. UALink was created to address these challenges by providing an open, high‑performance, and scalable interconnect standard for large AI accelerator clusters.

The main objectives of UALink are:

Open standard: any vendor can adopt and contribute, fostering ecosystem innovation.

High performance: bandwidth and latency comparable to or exceeding NVLink.

Scalability: support for connecting thousands to tens of thousands of accelerators.

Cost‑effectiveness: standardization reduces development and deployment costs.

Flexibility: multiple topologies and communication modes to suit diverse AI workloads.

UALink Architecture and Protocol Stack

UALink 1.0 defines a data rate of up to 200 GT/s per lane, with a signaling rate of 212.5 GT/s; the extra headroom absorbs the overhead of Ethernet‑style Layer 1 forward error correction (FEC) and line encoding. Lanes can be bonded as x1, x2, or x4 to form a station that provides up to 800 Gbps in each direction. This flexible configuration lets the number of accelerators and the per‑accelerator bandwidth scale with application demands.
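The figures above can be sanity‑checked with a short back‑of‑the‑envelope calculation. The constants come from the text; the helper function is illustrative, not part of any UALink API.

```python
# Sketch: UALink 1.0 station bandwidth arithmetic (figures from the text above).
DATA_RATE_PER_LANE_GBPS = 200.0  # effective data rate per lane
SIGNALING_RATE_GBPS = 212.5      # raw signaling rate, carrying FEC/encoding overhead

def station_bandwidth_gbps(lanes: int) -> float:
    """Effective one-direction bandwidth of an x1, x2, or x4 station."""
    assert lanes in (1, 2, 4), "UALink 1.0 stations are x1, x2, or x4"
    return lanes * DATA_RATE_PER_LANE_GBPS

# An x4 station reaches 800 Gbps in each direction.
print(station_bandwidth_gbps(4))   # 800.0

# The 212.5 GT/s signaling rate leaves 6.25% headroom for FEC and encoding.
overhead_pct = (SIGNALING_RATE_GBPS / DATA_RATE_PER_LANE_GBPS - 1.0) * 100
print(round(overhead_pct, 2))      # 6.25
```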

The UALink switch (ULS) can connect up to 1,024 accelerators or endpoints, assigning each a unique 10‑bit routing identifier. Virtual Pods group one or more accelerators for isolated communication within a larger Pod, enabling fine‑grained resource management.
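A 10‑bit identifier is exactly what a 1,024‑endpoint pod needs, since 2^10 = 1024. The sketch below shows the arithmetic with an illustrative header layout; the field packing is an assumption for demonstration, not taken from the UALink specification.

```python
# Sketch: why a 10-bit routing identifier covers a 1,024-endpoint pod.
# The header field layout is illustrative, not from the UALink spec.
ID_BITS = 10
MAX_ENDPOINTS = 1 << ID_BITS       # 2**10 = 1024 addressable endpoints

def pack_route(dest_id: int, tag: int) -> int:
    """Pack a destination ID into the low 10 bits of an illustrative header word."""
    if not 0 <= dest_id < MAX_ENDPOINTS:
        raise ValueError("destination ID must fit in 10 bits")
    return (tag << ID_BITS) | dest_id

def unpack_dest(header: int) -> int:
    """Recover the destination ID by masking off the low 10 bits."""
    return header & (MAX_ENDPOINTS - 1)

header = pack_route(dest_id=1023, tag=7)
print(MAX_ENDPOINTS, unpack_dest(header))   # 1024 1023
```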

UALink’s protocol stack consists of four layers:

Physical layer: Reuses IEEE 802.3dj Ethernet, supporting 106.25 GT/s (low‑speed) or 212.5 GT/s (high‑speed) modes, corresponding to 100 G–800 G bandwidth configurations. Enhanced coding and FEC reduce latency.

Data link layer: Aggregates 64‑byte transaction flits into 640‑byte flits for the physical layer and provides messaging services for rate advertisement, device/port discovery, and firmware communication.

Transaction layer: Converts protocol messages from the UALink Protocol Layer Interface (UPLI) into transaction‑layer flits (TLFlit) and vice‑versa, employing address caching to improve efficiency.

Protocol layer: The topmost layer handles accelerator‑to‑accelerator messaging with a symmetric protocol, processing messages through multiple functional sub‑layers.
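The data‑link‑layer aggregation described above packs ten 64‑byte transaction flits into each 640‑byte link‑level flit. A minimal sketch of that grouping, with zero‑padding of a partial final group as an assumed (not specified) tail behavior:

```python
# Sketch: aggregating 64-byte transaction flits into 640-byte link flits,
# as the data link layer description above states. Padding of a partial
# final group is an illustrative assumption, not specified by the text.
TL_FLIT_BYTES = 64
DL_FLIT_BYTES = 640
TL_PER_DL = DL_FLIT_BYTES // TL_FLIT_BYTES   # 10 transaction flits per link flit

def aggregate(tl_flits: list) -> list:
    """Group 64 B transaction flits into 640 B link flits, zero-padding the tail."""
    assert all(len(f) == TL_FLIT_BYTES for f in tl_flits)
    link_flits = []
    for i in range(0, len(tl_flits), TL_PER_DL):
        group = tl_flits[i:i + TL_PER_DL]
        group += [bytes(TL_FLIT_BYTES)] * (TL_PER_DL - len(group))  # pad short group
        link_flits.append(b"".join(group))
    return link_flits

flits = [bytes([n]) * TL_FLIT_BYTES for n in range(23)]  # 23 transaction flits
out = aggregate(flits)
print(TL_PER_DL, len(out), len(out[0]))   # 10 3 640
```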

By offering an open, high‑performance interconnect, UALink aims to drive a technological revolution in AI accelerator networking and lay a solid foundation for future ultra‑large AI compute infrastructures.

[Figure: UALink architecture diagram]
[Figure: Protocol stack diagram]
[Figure: UALink switch illustration]
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: High‑Performance Networking · Protocol Stack · AI interconnect · UALink · accelerator clusters · open standard
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
