
Server Hardware Architecture Overview: Chipsets, PCIe Evolution, and Data‑Center Design

This article provides a technical overview of server hardware architecture, covering the evolution from dual‑chip (MCH + ICH) to single‑chip (PCH) chipset designs, PCIe specifications, PCB and cooling considerations, and the power‑supply strategies used in modern data centers.


Servers are high‑performance computers that provide services to client machines. The article opens by pointing readers to a 182‑page PPT covering fundamental server concepts.

Before 2012, most motherboard chipsets used a dual‑chip architecture (Memory Controller Hub + I/O Controller Hub). Intel then shifted to a single‑chip design, integrating the north‑bridge into the CPU and renaming the south‑bridge the Platform Controller Hub (PCH); the trend continues as PCH functions migrate onto the CPU die itself.

The north‑bridge (NB) handled communication between the CPU, memory, accelerated graphics (AGP/PCIe) and other high‑speed buses, while the south‑bridge (SB/PCH) managed I/O functions such as PCIe, USB, SATA, audio, keyboard, and power management, positioned lower on the board to simplify routing.

PCIe is the primary high‑speed I/O bus, succeeding ISA and PCI/PCI‑X. Across versions 1.0‑5.0, per‑lane signaling has risen from 2.5 GT/s (250 MB/s per lane) to 32 GT/s (roughly 4 GB/s per lane), giving a x16 link up to about 64 GB/s of total bidirectional bandwidth on PCIe 4.0 and about 128 GB/s on PCIe 5.0. The PCI‑SIG, originally led by Intel, governs the standards, with major releases roughly every three years.
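The per‑lane figures above follow directly from the transfer rate and the line encoding (8b/10b for Gen 1‑2, 128b/130b from Gen 3 on). A minimal sketch of that arithmetic, using the published rates and ignoring protocol overhead beyond line encoding:

```python
# Approximate PCIe throughput per lane and per x16 link.
# Figures are raw-link maxima per direction; real devices see less
# due to packet headers, flow control, and other protocol overhead.

# generation -> (transfer rate in GT/s, encoding payload/raw ratio)
PCIE_GENS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def lane_bandwidth_gbs(gen: str) -> float:
    """Usable one-direction bandwidth of a single lane, in GB/s."""
    rate_gt, encoding = PCIE_GENS[gen]
    return rate_gt * encoding / 8  # bits -> bytes

def link_bandwidth_gbs(gen: str, lanes: int = 16) -> float:
    """One-direction bandwidth of an xN link, in GB/s."""
    return lane_bandwidth_gbs(gen) * lanes

for gen in PCIE_GENS:
    print(f"PCIe {gen}: {lane_bandwidth_gbs(gen):.2f} GB/s per lane, "
          f"{link_bandwidth_gbs(gen):.1f} GB/s per x16 direction")
```

This reproduces the familiar 250 MB/s per lane for PCIe 1.0 and roughly 63 GB/s per direction for a PCIe 5.0 x16 link (the "128 GB/s" headline figure counts both directions).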

PCIe lanes can be combined (x1, x2, x4, x8, x16) to meet bandwidth needs of GPUs, AI accelerators, network cards, and storage devices. The article also details PCB layer requirements for different PCIe generations, material loss‑factor (Df) targets, and typical dimensions for dual‑socket server boards (≈45 cm × 45 cm).
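Lane widths are typically sized to the device: a hedged sketch of that matching, using a hypothetical helper and approximate per‑lane throughputs (the width values are the standard ones; the bandwidth requirement for the NIC is illustrative):

```python
# Hypothetical helper: given a device's required bandwidth and a PCIe
# generation's approximate usable GB/s per lane (per direction), pick
# the smallest standard lane width that satisfies it.

PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}
STANDARD_WIDTHS = (1, 2, 4, 8, 16)

def min_lane_width(required_gbs: float, gen: str) -> int:
    """Smallest x1/x2/x4/x8/x16 width meeting the requirement."""
    per_lane = PER_LANE_GBS[gen]
    for width in STANDARD_WIDTHS:
        if width * per_lane >= required_gbs:
            return width
    raise ValueError(f"no standard PCIe {gen} width reaches {required_gbs} GB/s")

# Example: a 100 GbE NIC needs ~12.5 GB/s of line-rate bandwidth.
print(min_lane_width(12.5, "3.0"))  # 16
print(min_lane_width(12.5, "4.0"))  # 8
print(min_lane_width(12.5, "5.0"))  # 4
```

The same device halves its lane requirement with each PCIe generation, which is why newer platforms can host more high‑bandwidth endpoints from a fixed CPU lane budget.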

Data‑center power is supplied via UPS (AC‑to‑DC conversion with battery backup) or HVDC systems, the latter offering higher efficiency, smaller footprint, and lower cost. Cooling has shifted from air to liquid solutions (cold plates, immersion, spray) to handle increased CPU power.
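HVDC's efficiency advantage comes mostly from removing a conversion stage: end‑to‑end efficiency is the product of the per‑stage efficiencies, so fewer stages means less loss. A sketch with assumed, illustrative stage efficiencies (not measured data):

```python
# Illustrative only: the stage-efficiency numbers below are assumptions
# chosen to show the shape of the comparison, not vendor figures.

def chain_efficiency(stages):
    """End-to-end efficiency is the product of per-stage efficiencies."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Double-conversion UPS path: AC->DC, DC->AC, then the server PSU's AC->DC.
ups_chain = [0.96, 0.95, 0.94]
# HVDC path: one AC->DC rectification stage, then a DC->DC server PSU.
hvdc_chain = [0.97, 0.96]

print(f"UPS  chain: {chain_efficiency(ups_chain):.1%}")   # ~85.7%
print(f"HVDC chain: {chain_efficiency(hvdc_chain):.1%}")  # ~93.1%
```

Even with generous per‑stage numbers, the extra DC→AC→DC round trip in the UPS path costs several points of efficiency, which at data‑center scale translates directly into power and cooling cost.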

Overall, the piece combines hardware fundamentals, chipset evolution, bus standards, PCB design, and data‑center power and cooling considerations to give a holistic view of modern server platforms.

Tags: architecture, CPU, data center, server hardware, PCIe, chipset, PCH
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
