Tag

NVLink

Architects' Tech Alliance
May 26, 2025 · Artificial Intelligence

NVLink Fusion: NVIDIA’s High‑Bandwidth Interconnect for Heterogeneous AI Computing

NVLink Fusion, unveiled at Computex 2025, extends NVIDIA’s NVLink technology to enable high‑bandwidth, low‑latency connections between CPUs and GPUs or third‑party accelerators. It offers up to 900 GB/s of bandwidth, flexible heterogeneous configurations, a broader partner ecosystem, performance gains for AI training and inference, and potential cost reductions.

AI · CPU · GPU
12 min read

Architects' Tech Alliance
Apr 28, 2025 · Artificial Intelligence

NVLink High‑Speed Interconnect: Architecture, Evolution, and Performance

NVLink, NVIDIA's high‑bandwidth interconnect introduced with the P100 GPU, supplements PCIe with significantly higher data rates and lower latency for GPU‑GPU and GPU‑CPU communication, and has evolved through multiple generations to support modern AI and high‑performance computing workloads.

AI acceleration · GPU interconnect · High Performance Computing
9 min read

Architects' Tech Alliance
Sep 23, 2024 · Artificial Intelligence

Venado Supercomputer: Architecture, Performance, and Design Insights

The Venado supercomputer, built for Los Alamos National Laboratory, combines Nvidia Grace CPUs with Hopper GPUs, leverages high‑bandwidth memory and Slingshot interconnects, and targets a balanced 80/20 CPU‑GPU workload split to support demanding AI and HPC applications.

Grace CPU · HPC · Los Alamos
13 min read

Architects' Tech Alliance
Jul 7, 2024 · Operations

Overview of Popular GPU/TPU Cluster Networking Technologies: NVLink, InfiniBand, RoCE, and DDC

This article reviews the main GPU/TPU cluster networking solutions—including NVLink, InfiniBand, RoCE Ethernet, and DDC full‑schedule fabrics—examining their latency, lossless transmission, congestion control, cost, scalability, and suitability for large‑scale LLM training workloads.

AI training · DDC · GPU networking
16 min read

IT Services Circle
Jun 6, 2024 · Artificial Intelligence

Nvidia Unveils Blackwell GPU and AI Supercomputing Roadmap

Nvidia’s latest Blackwell GPU, presented by Jensen Huang, promises unprecedented performance and energy efficiency for large‑scale AI models. The company also showcases accelerated computing, NVLink interconnects, AI‑optimized DGX servers, the NIM platform for rapid LLM deployment, and ambitious projects such as Earth‑2 digital twins and next‑generation embodied AI robots.

AI · Accelerated Computing · Blackwell
18 min read

Architects' Tech Alliance
May 15, 2024 · Artificial Intelligence

Detailed Overview of GPU Server Architectures: A100/A800 and H100/H800 Nodes

This article provides a comprehensive technical overview of large‑scale GPU server architectures, detailing the component topology of 8‑GPU A100/A800 and H100/H800 nodes, explaining storage network cards, NVSwitch interconnects, bandwidth calculations, and the trade‑offs between RoCEv2 and InfiniBand for AI workloads.

AI training · GPU · High Performance Computing
13 min read

Architects' Tech Alliance
May 14, 2024 · Fundamentals

Fundamentals of GPU Computing: PCIe, NVLink, NVSwitch, and HBM

This article provides a comprehensive overview of the core components and terminology of large‑scale GPU computing, covering GPU server architecture, PCIe interconnects, NVLink generations, NVSwitch, high‑bandwidth memory (HBM), and bandwidth unit considerations for AI and HPC workloads.

AI hardware · GPU computing · HBM
11 min read

Architects' Tech Alliance
Apr 2, 2024 · Artificial Intelligence

Evolution and Forecast of Nvidia NVLink, NVLink C2C, and B100/X100 GPU Architectures

The article analyses the historical evolution of Nvidia's NVLink and NVLink C2C interconnect technologies, compares them with PCIe, Ethernet and InfiniBand, and uses these trends to predict future AI‑chip architectures such as the B100 and X100 GPUs, highlighting design trade‑offs and packaging challenges.

AI chip · B100 · GPU architecture
15 min read

Architects' Tech Alliance
Dec 24, 2023 · Artificial Intelligence

Overview of Popular GPU/TPU Cluster Networking Technologies for LLM Training

This article examines the main GPU/TPU cluster networking options—including NVLink, InfiniBand, RoCE Ethernet Fabric, and DDC full‑schedule networks—explaining their latency, lossless transmission, congestion control, cost, scalability, and suitability for large‑scale LLM training workloads.

GPU networking · High Performance Computing · InfiniBand
18 min read

Architects' Tech Alliance
Aug 21, 2023 · Artificial Intelligence

AI Compute Landscape: GPU Architectures, Tensor Cores, NVLink, and Scaling Challenges

The article surveys the AI compute ecosystem, explaining why CPUs are unsuitable for AI workloads, how heterogeneous CPU‑plus‑accelerator designs dominate, and detailing the evolution of NVIDIA GPUs, Tensor Cores, memory technologies, and inter‑GPU networking that enable large‑scale model training.

AI compute · AI hardware · GPU architecture
11 min read

Architects' Tech Alliance
Jan 30, 2023 · Fundamentals

NVIDIA Grace CPU Superchip: Architecture, Performance, and Key Features

The article provides a detailed overview of NVIDIA's Grace CPU Superchip, describing its Arm‑based architecture, NVLink‑C2C interconnect, scalable coherency fabric, high‑bandwidth LPDDR5X memory, extensive I/O options, and software ecosystem, highlighting its suitability for HPC and AI workloads.

AI · Arm Neoverse · CPU
10 min read

Architects' Tech Alliance
Dec 30, 2020 · Artificial Intelligence

Understanding GPUs, AI Accelerators, and Market Trends

The article explains GPU evolution, its integration with CPUs, interconnect technologies like PCIe and NVLink, market shares of NVIDIA, AMD and Intel, AI accelerator types (GPU, FPGA, ASIC), and the roles of training and inference in cloud AI, while also promoting a paid 182‑page PPT resource.

AI accelerator · GPU · HPC
7 min read

Architects' Tech Alliance
Oct 28, 2020 · Artificial Intelligence

Understanding NVIDIA NVLink: Architecture, Features, and Applications

The article introduces NVIDIA’s third‑generation NVLink technology, detailing its high‑bandwidth GPU‑GPU and GPU‑CPU interconnect, key architectural breakthroughs such as the Ampere‑based A100 GPU, multi‑instance GPU, and NVSwitch, and discusses its impact on AI, HPC, and graphics workloads.

Artificial Intelligence · GPU interconnect · High Performance Computing
7 min read

Architects' Tech Alliance
Feb 2, 2019 · Artificial Intelligence

An Overview of NVIDIA NVLink: Architecture, Topology, and Performance

This article explains NVIDIA's NVLink interconnect technology, covering its history, protocol layers, bandwidth advantages over PCIe, topologies such as the HGX-1/DGX-1 mesh, the NVSwitch extension, and performance gains for deep‑learning and high‑performance computing workloads.

AI acceleration · GPU interconnect · High Performance Computing
7 min read