
Bus-Level Data Center Network Technology: RDMA Acceleration and Ultra-Low Latency Innovations

This article examines bus‑level data center network technologies, detailing how RDMA and ultra‑low‑latency forwarding mechanisms reduce end‑to‑end delays, enable high‑performance computing and AI workloads, and drive the evolution toward hyper‑converged, cloud‑native infrastructures.

Architects' Tech Alliance

According to Hyperion Research, 28 to 38 exascale (E‑class) supercomputers will be built worldwide between 2021 and 2026. The discussion below is based on the "Bus‑Level Data Center Network Technology Whitepaper".

In traditional data centers, end‑to‑end latency was dominated by compute and storage; as compute and storage performance has improved, the network has become the bottleneck, as illustrated by the accompanying diagram.

RDMA (Remote Direct Memory Access) lets one host read and write another host's memory directly, without involving the remote CPU or operating system, dramatically reducing latency compared with TCP/IP, which suffers from packet‑loss retransmissions and high CPU and memory overhead.

RDMA achieves high performance through several mechanisms:

Zero‑copy – data moves directly between application buffers and the NIC, without intermediate copies through the kernel network stack.

Kernel bypass – transmission occurs in user space, avoiding context switches.

No CPU involvement – remote memory can be accessed without consuming remote CPU cycles.

Message‑based transactions – data is handled as discrete messages rather than streams.

Scatter/gather support – multiple discrete buffers can be sent or received in a single operation.
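RDMA's scatter/gather elements (SGEs) have a close analogue in POSIX vectored I/O. As a rough illustration of the idea only (using `os.writev`/`os.readv` on a pipe as a stand‑in for RDMA verbs; no RDMA hardware or library is involved), several discrete buffers can be sent in one operation and received into separate pre‑sized buffers:

```python
import os

# Sketch of scatter/gather I/O, the same idea RDMA exposes via SGE
# lists. POSIX writev/readv on a pipe stand in for the RDMA verbs:
# several buffers go out as one operation and land in several buffers.
r, w = os.pipe()

# "Gather list" on the send side: three separate buffers, one call.
header = b"HDR0"
payload = b"payload-bytes"
trailer = b"TRL0"
sent = os.writev(w, [header, payload, trailer])
os.close(w)

# "Scatter list" on the receive side: pre-sized buffers filled in order.
buf_a = bytearray(4)
buf_b = bytearray(len(payload))
buf_c = bytearray(4)
received = os.readv(r, [buf_a, buf_b, buf_c])
os.close(r)

print(sent, received, bytes(buf_a), bytes(buf_b), bytes(buf_c))
# -> 21 21 b'HDR0' b'payload-bytes' b'TRL0'
```

In real RDMA, the NIC walks the SGE list itself, so the assembly happens in hardware rather than in a syscall.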

RDMA is widely deployed in supercomputing, AI training, and storage. Its transport variants include InfiniBand (IB), RoCE (RDMA over Converged Ethernet), and iWARP; storage‑specific protocols built on it include SRP, iSER, and NVMe‑over‑Fabrics (NVMe‑oF), which is most commonly deployed over an RDMA transport.

RoCE has become the dominant RDMA technology due to Ethernet's openness and cost advantages, but traditional Ethernet still suffers from congestion, packet loss, and jitter, limiting its ability to meet high‑performance computing and storage requirements.

The bus‑level data center network (DCN) concept aims to bring network end‑to‑end latency down toward that of a local bus, addressing conventional Ethernet‑based DCNs whose end‑to‑end latency can reach millisecond scale.

Ultra‑low static forwarding latency techniques use a virtual short‑address routing mechanism to cut switch forwarding delay from the microsecond range to sub‑hundred nanoseconds, achieving roughly 100 ns per‑hop latency, a 6‑10× improvement over mainstream Ethernet switching chips.
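To put the per‑hop figure in perspective, a back‑of‑the‑envelope budget over a multi‑hop path can be sketched. The 100 ns figure comes from the text; the 700 ns mainstream per‑hop figure and the 5‑hop path are assumptions, chosen to sit inside the stated 6‑10× range:

```python
# Rough static-latency budget for a multi-hop path (illustrative only).
# 100 ns is the bus-level per-hop figure quoted in the text; 700 ns is
# an assumed mainstream-Ethernet-chip figure, consistent with the
# stated 6-10x improvement.
def path_latency_ns(hops: int, per_hop_ns: float) -> float:
    """Total static forwarding latency across `hops` switch hops."""
    return hops * per_hop_ns

hops = 5  # assumed worst-case path length in a 3-tier topology
bus_level = path_latency_ns(hops, 100)
mainstream = path_latency_ns(hops, 700)
print(bus_level, mainstream, mainstream / bus_level)  # -> 500 3500 7.0
```

The gap compounds per hop, which is why cutting static forwarding delay matters most on deep topologies.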

Bufferless, near‑zero‑queue flow‑control technology introduces unscheduled and scheduled packet streams, using token‑based scheduling to achieve sub‑microsecond dynamic latency and near‑zero average queue delay, supporting large‑scale data center traffic without congestion.
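The whitepaper's scheduling algorithm is not reproduced here, so the following toy model is only a sketch of the general credit/token idea: a sender may inject a scheduled packet only while holding a token granted by the receiver, which bounds the receiver‑side queue depth regardless of offered load:

```python
from collections import deque

# Toy credit/token-based scheduler (illustrative; not the whitepaper's
# actual mechanism). The receiver grants one token per free queue slot;
# a sender transmits a scheduled packet only when holding a token, so
# the receiver-side queue depth never exceeds the tokens outstanding.
def run(slots: int, packets: int) -> int:
    tokens = deque(range(slots))   # credits granted by the receiver
    queue: list[int] = []          # receiver-side queue
    max_depth = 0
    for pkt in range(packets):
        if not tokens:             # no credit: sender must wait
            queue.pop(0)           # receiver drains one packet...
            tokens.append(0)       # ...and returns a token
        tokens.popleft()           # spend a token to transmit
        queue.append(pkt)
        max_depth = max(max_depth, len(queue))
    return max_depth

print(run(slots=2, packets=10))  # -> 2 (depth bounded by the 2 tokens)
```

However many packets are offered, the maximum queue depth equals the number of tokens, which is the sense in which scheduled traffic can run with near‑zero queuing.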

For long‑distance lossless transmission (e.g., 100 km at 100 Gbps), a "point‑brake" flow‑control mechanism replaces traditional pause‑frame PFC, using fine‑grained periodic scanning and back‑pressure frames to keep latency well below the 2 ms of conventional solutions.
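The difficulty at 100 km is easy to quantify: a back‑pressure signal cannot outrun the light in the fiber, so the receiver must be able to absorb a full bandwidth‑delay product of in‑flight data before any pause takes effect. A quick calculation (assuming the usual ~5 µs/km propagation delay in single‑mode fiber, i.e. a refractive index around 1.5):

```python
# Bandwidth-delay product for a 100 km, 100 Gbps lossless link
# (illustrative arithmetic; ~5 us/km fiber propagation delay assumed).
distance_km = 100
rate_gbps = 100
one_way_s = distance_km * 5e-6          # ~0.5 ms one way
rtt_s = 2 * one_way_s                   # ~1 ms for pause signal + data
in_flight_bytes = rate_gbps * 1e9 / 8 * rtt_s
print(one_way_s * 1e3, rtt_s * 1e3, in_flight_bytes / 1e6)
# -> 0.5 1.0 12.5  (ms, ms, MB of buffer to cover one RTT losslessly)
```

Fine‑grained periodic scanning lets the back‑pressure react before buffers approach this bound, rather than after a coarse pause‑frame threshold is crossed.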

New network topologies reduce hop count by about 20 % compared to traditional CLOS architectures, improving static latency for high‑performance workloads.

The integrated compute‑network (网算一体) approach reduces the number of network ingress events, mitigating the growing communication bottleneck as GPU compute power outpaces network bandwidth growth.
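One way to see the effect is to count network injections per node for an N‑node allreduce. The comparison below is deliberately simplified (real collectives and in‑network reduction schemes differ in detail): a host‑based ring allreduce injects 2(N−1) messages per node, while switch‑side aggregation needs only one upstream message per node:

```python
# Simplified per-node message-injection counts for an N-node allreduce
# (illustrative; actual collectives and in-network reduction differ).
def ring_allreduce_injections(n: int) -> int:
    # Ring allreduce: n-1 reduce-scatter steps + n-1 allgather steps.
    return 2 * (n - 1)

def in_network_injections(n: int) -> int:
    # Switch-side aggregation: one message up, the fabric reduces,
    # and one aggregated result comes back per node.
    return 1

n = 256
print(ring_allreduce_injections(n), in_network_injections(n))  # -> 510 1
```

Fewer ingress events per collective is precisely what keeps the network from becoming the limiter as GPU throughput scales faster than link bandwidth.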

In summary, bus‑level DCN technologies combine ultra‑low static forwarding, bufferless flow control, and innovative topologies to meet the stringent latency demands of high‑performance computing, AI, and hyper‑converged cloud infrastructures, paving the way for next‑generation data center architectures.

Written by

Architects' Tech Alliance

Sharing project experience and insights into cutting‑edge architectures, focusing on cloud computing, microservices, big data, hyper‑convergence, storage, data protection, artificial intelligence, and industry practices and solutions.
