Comprehensive Overview of InfiniBand Technology and Architecture
This article provides an in‑depth examination of InfiniBand, one of the fastest‑growing high‑bandwidth, low‑latency interconnect technologies. It covers the InfiniBand Trade Association, packet structures and data transmission, the layered architecture, switching mechanisms, and a comparison with Ethernet, and closes with development prospects, highlighting InfiniBand's advantages for high‑performance computing.
1. Introduction
With the rapid increase in CPU performance, high‑speed interconnect (HSI) has become crucial for high‑performance computing (HPC). InfiniBand, developed under the InfiniBand Trade Association (IBTA), is a high‑performance, low‑latency technology that has grown faster than other HSI solutions.
2. InfiniBand Trade Association (IBTA)
Founded in 1999, IBTA unites industry leaders such as HP, IBM, Intel, Mellanox, Oracle, QLogic, and Dell to oversee compliance and interoperability testing and to advance the InfiniBand specifications.
3. InfiniBand Overview
InfiniBand provides point‑to‑point switched communication between processors and I/O devices and supports up to 64,000 addressable devices in a subnet. Its architecture defines standards for subnets, end nodes, switches, links, and subnet managers, offering universal, low‑latency, high‑bandwidth, and low‑cost connectivity.
InfiniBand delivers bus‑level bandwidth and latency and implements RDMA (Remote Direct Memory Access), transferring data directly between the memories of two nodes without involving the operating system on the data path, which reduces CPU overhead and latency.
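As a rough illustration of what RDMA looks like to software, the sketch below posts a one‑sided RDMA WRITE through the libibverbs API. It assumes the queue pair is already connected and that the peer's buffer address and remote key were exchanged out of band; the verbs calls are real, but setup, error handling, and completion handling are omitted, so treat it as a minimal sketch rather than a working application.

```c
/* Minimal sketch: posting a one-sided RDMA WRITE with libibverbs.
 * Assumes qp is a connected RC queue pair, mr is a locally registered
 * memory region, and remote_addr/rkey were exchanged out of band
 * (e.g. over TCP). Illustrative only. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local source buffer          */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,              /* key of the registered region */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided write        */
    wr.send_flags          = IBV_SEND_SIGNALED;   /* generate a completion  */
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's virtual address */
    wr.wr.rdma.rkey        = rkey;                /* peer's remote key      */

    /* The HCA moves the data directly into the remote node's memory;
     * neither side's kernel is involved in the data path. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

After posting, the application would typically poll the completion queue with ibv_poll_cq to learn when the HCA has finished the transfer.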
An InfiniBand system comprises channel adapters (Host Channel Adapters on servers and Target Channel Adapters on I/O devices), switches, routers, cables, and connectors. Switches additionally provide subnet management, performance monitoring, and board management functions.
4. InfiniBand Packets and Data Transmission
An InfiniBand packet consists of a Local Route Header (LRH), an optional Global Route Header (GRH), a Base Transport Header (BTH), Extended Transport Headers (ETH), the Payload (PYLD), an Invariant CRC (ICRC), and a Variant CRC (VCRC). Within a subnet the LRH carries 16‑bit Local Identifiers (LIDs), while the GRH carries 128‑bit, IPv6‑style Global Identifiers (GIDs) that identify the source and destination when packets are routed between subnets.
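To make the header layout more concrete, here is an illustrative C sketch of the two fixed‑size headers every packet carries, the LRH and the BTH. Field names and widths follow the IBTA specification, but the bit‑field packing shown is compiler‑ and endian‑dependent, so it is meant only to visualize field sizes; real parsers extract these fields with explicit shifts and masks.

```c
/* Illustrative layout of the LRH and BTH of an InfiniBand packet.
 * Bit-field ordering is compiler/endianness dependent; shown only to
 * visualize field sizes, not for wire-format parsing. */
#include <stdint.h>

struct ib_lrh {                 /* Local Route Header, 8 bytes */
    uint16_t vl      : 4;       /* virtual lane                */
    uint16_t lver    : 4;       /* link version                */
    uint16_t sl      : 4;       /* service level               */
    uint16_t rsvd0   : 2;
    uint16_t lnh     : 2;       /* link next header            */
    uint16_t dlid;              /* destination LID (16 bits)   */
    uint16_t rsvd1   : 5;
    uint16_t pkt_len : 11;      /* length in 4-byte words      */
    uint16_t slid;              /* source LID (16 bits)        */
};

struct ib_bth {                 /* Base Transport Header, 12 bytes */
    uint8_t  opcode;            /* e.g. SEND, RDMA WRITE           */
    uint8_t  se      : 1;       /* solicited event                 */
    uint8_t  migreq  : 1;       /* migration request               */
    uint8_t  pad_cnt : 2;       /* payload pad count               */
    uint8_t  tver    : 4;       /* transport header version        */
    uint16_t pkey;              /* partition key                   */
    uint32_t rsvd    : 8;
    uint32_t dest_qp : 24;      /* destination queue pair          */
    uint32_t ack_req : 1;       /* acknowledge request             */
    uint32_t rsvd2   : 7;
    uint32_t psn     : 24;      /* packet sequence number          */
};
```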
5. InfiniBand Architecture Layers
The architecture comprises physical, link, network, and transport layers. The physical layer defines the electrical and mechanical characteristics, signaling, and encoding; the link layer handles local addressing, buffering, flow control, and error detection; the network layer routes packets between subnets; and the transport layer segments messages into packets, adds transport headers, and provides reliable end‑to‑end delivery.
6. Switching Mechanism
InfiniBand employs a switched‑fabric architecture whose switches implement output‑port selection, virtual‑lane (VL) selection, credit‑based flow control, unicast/multicast/broadcast forwarding, partitioning, error checking, and VL arbitration. Major switch vendors include Mellanox, QLogic, Cisco, and IBM.
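The credit‑based flow control mentioned above is what makes the fabric lossless: a sender may transmit on a virtual lane only while it holds buffer credits advertised by the receiver, so traffic stalls rather than being dropped. The toy model below illustrates the idea; it is a simplification and does not reproduce the FCTBS/FCCL accounting defined in the IBTA specification.

```c
/* Toy model of per-VL credit-based flow control: the sender consumes
 * credits as it transmits and blocks when none remain; the receiver
 * returns credits as it drains its buffers. Simplified illustration. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_VLS 4

struct vl_credit {
    int credits;                 /* buffer blocks the receiver advertised */
};

static struct vl_credit vls[NUM_VLS];

/* Sender side: transmit only if the chosen VL still has enough credit. */
bool try_send(int vl, int blocks_needed)
{
    if (vls[vl].credits < blocks_needed)
        return false;            /* back-pressure: wait, don't drop */
    vls[vl].credits -= blocks_needed;
    printf("VL%d: sent %d blocks, %d credits left\n",
           vl, blocks_needed, vls[vl].credits);
    return true;
}

/* Receiver side: as buffers drain, credits are returned to the sender
 * via link-level flow-control packets. */
void return_credits(int vl, int blocks_freed)
{
    vls[vl].credits += blocks_freed;
}

int main(void)
{
    for (int vl = 0; vl < NUM_VLS; vl++)
        vls[vl].credits = 8;     /* initial advertisement per VL */

    try_send(0, 5);              /* succeeds: 3 credits remain     */
    if (!try_send(0, 5))         /* blocked until credits return   */
        puts("VL0: sender stalls instead of dropping the packet");
    return_credits(0, 5);
    try_send(0, 5);              /* succeeds after credits return  */
    return 0;
}
```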
7. Comparison with Ethernet
InfiniBand outperforms Ethernet in both throughput and latency, making it well suited to HPC workloads. Its cost‑effectiveness and growing share among the top‑100 supercomputers underscore its advantage over Ethernet in this segment.
8. Conclusion
InfiniBand is poised to replace 10/40‑Gb Ethernet as the preferred high‑speed interconnect, with future growth in GPU, SSD, and clustered database applications. IBTA predicts rapid market demand for FDR, EDR, and HDR technologies, with roadmap bandwidths of up to 1000 Gbps by 2020.