
Evolution and Technical Overview of Computer Memory: From SDRAM to DDR4 and Future Directions

This article explains the historical development of computer memory from early bus‑connected modules to integrated northbridge designs, details the characteristics of SDRAM, DDR, DDR2, DDR3, and DDR4 technologies, and discusses future trends in capacity, voltage, and frequency.

Architects' Tech Alliance

Early memory modules connected to the northbridge over the memory bus; the northbridge in turn communicated with the CPU over the front-side bus. Starting with Intel's Nehalem microarchitecture, the memory controller was integrated into the CPU, allowing memory to connect directly to the processor.

With AMD's FM1 socket and Intel's LGA 1156 socket, the northbridge functions moved onto the processor, eliminating the separate northbridge chip and leaving only the southbridge on the motherboard.

The main system bottleneck is that CPUs are far faster than disks, so an intermediate layer, memory, is needed to mediate data exchange between them. (By contrast, the Harvard architecture separates instruction storage from data storage.)

Memory (RAM) temporarily stores the data processed by the CPU and exchanges data with external storage such as hard disks. All program execution occurs in memory, making its performance critical to overall system speed.

At the end of 1996, SDRAM appeared in systems, designed to synchronize timing with the CPU.

SDR SDRAM (Single Data Rate) performs one data transfer per clock cycle; reads and writes cannot overlap, because the previous command must complete before the next access begins.

DDR SDRAM (Double Data Rate) doubles the data rate by transferring data on both the rising and falling edges of the clock, achieving twice the throughput of SDR SDRAM at the same core frequency.

Summary: DDR transfers data on both clock edges, delivering double the data per clock compared to SDR.
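The SDR-versus-DDR distinction above can be sketched numerically. This is an illustrative snippet (the 133 MHz clock is a hypothetical example, not a value from the article): DDR completes a transfer on both the rising and falling clock edges, so it moves twice as much data per cycle as SDR.

```python
# Illustrative sketch: transfers per second for SDR vs. DDR at the
# same I/O clock. DDR uses both clock edges, SDR only one.

def transfers_per_second(clock_mhz: float, edges_per_cycle: int) -> float:
    """Return the transfer rate in mega-transfers per second (MT/s)."""
    return clock_mhz * edges_per_cycle

sdr = transfers_per_second(133, 1)  # one transfer per clock cycle
ddr = transfers_per_second(133, 2)  # rising and falling edges

print(sdr)  # 133.0 MT/s
print(ddr)  # 266.0 MT/s
```

At the same 133 MHz core clock, DDR delivers 266 MT/s where SDR delivers 133 MT/s, which is exactly the 2x multiplier the summary describes.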

DDR2 SDRAM further improves performance with a 4‑bit prefetch (double DDR’s 2‑bit) and an I/O clock that is twice the DDR clock.

Summary: DDR2’s 4‑bit prefetch yields a 4× multiplier (2× for clock × 2× for prefetch).

DDR3 SDRAM introduces an 8‑bit prefetch, operates at 800‑1600 MT/s, reduces voltage to 1.5 V, and adds features such as Automatic Self‑Refresh (ASR) and Self‑Refresh Temperature (SRT) to improve power efficiency and data integrity.

Summary: DDR3’s 8‑bit prefetch provides an 8× multiplier over SDR.
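The prefetch multipliers for the three generations can be collected into one small sketch. The 200 MHz core frequency below is a hypothetical example chosen so the DDR3 result lands on the familiar DDR3-1600 rating; the multipliers themselves (2x, 4x, 8x) come straight from the text above.

```python
# Effective data rate = core (cell array) frequency x prefetch multiplier,
# where the multiplier combines the I/O clock ratio and dual-edge transfer.
PREFETCH = {"SDR": 1, "DDR": 2, "DDR2": 4, "DDR3": 8}

def effective_rate_mts(core_mhz: float, generation: str) -> float:
    """Effective transfer rate in MT/s for a given core frequency."""
    return core_mhz * PREFETCH[generation]

# A 200 MHz cell array yields:
for gen, mult in PREFETCH.items():
    print(gen, effective_rate_mts(200, gen))
# SDR 200.0, DDR 400.0, DDR2 800.0, DDR3 1600.0
```

The same 200 MHz array thus rates as DDR-400, DDR2-800, or DDR3-1600 depending only on the prefetch depth.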

DDR4 SDRAM lowers the supply voltage to 1.2 V, increases bandwidth, adds four independent bank groups, and incorporates Data Bus Inversion (DBI), write CRC, and command/address (CA) parity to enhance signal integrity and reliability.

In 2017 Intel launched the Purley server platform (Skylake‑based) with up to 28 cores, 6‑channel DDR4 memory, and the UltraPath Interconnect (UPI) bus, offering up to 10.4 GT/s transfer rates.

The future evolution of memory focuses on three main directions: increasing capacity, lowering voltage, and raising frequency.

Capacity growth: 4 GB → 8 GB → 16 GB → 32 GB → 64 GB → … up to 512 GB.

Voltage reduction: 1.5 V → 1.35 V → 1.2 V → …

Frequency increase: 1333 MHz → 1600 MHz → 1866 MHz → 2133 MHz → 2400 MHz → … 3200 MHz.

Main memory manufacturers include DRAM chip makers Samsung, SK Hynix, and Micron, while module vendors such as Ramaxel and Kingston assemble DIMMs from these chips.

Memory has three frequency metrics: core frequency (the actual operating frequency of the memory cell array), clock frequency (the I/O buffer transfer frequency), and effective data transfer frequency (the data throughput rate).
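The relationship between the three metrics can be made concrete for DDR3, where the I/O buffer runs at four times the core frequency and dual-edge transfer doubles it again. The helper below is a sketch assuming a DDR3-1600 module (200 MHz core), a hypothetical example not taken from the article.

```python
# Derive the three frequency metrics for a DDR3 module from its
# core (cell array) frequency.

def ddr3_frequencies(core_mhz: float) -> dict:
    return {
        "core_mhz": core_mhz,            # cell array operating frequency
        "io_clock_mhz": core_mhz * 4,    # DDR3 I/O buffer runs at 4x the core
        "effective_mts": core_mhz * 8,   # dual-edge transfer doubles the I/O clock
    }

print(ddr3_frequencies(200))
# {'core_mhz': 200, 'io_clock_mhz': 800, 'effective_mts': 1600}
```

A module sold as "DDR3-1600" therefore has a cell array ticking at only 200 MHz; the marketing number is the effective transfer rate.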

Formulas:

- Maximum system memory bandwidth = memory rated frequency × bus width × channel count × CPU count
- Actual memory bandwidth = memory rated frequency × bus width × actual channel count
- Equivalently, actual memory bandwidth = core frequency × multiplier × bus width × actual channel count

Example: a DDR3‑1066 module in single‑channel mode (64‑bit bus) yields an actual bandwidth of (1066/8) × 64 × 1 × 8 = 68,224 Mbit/s, roughly 8.5 GB/s.
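The worked example can be reproduced directly from the core-frequency form of the formula. This sketch uses the figures from the example above (DDR3-1066, 64-bit bus, single channel, 8x multiplier).

```python
# Actual bandwidth = core frequency x bus width x channels x multiplier,
# where core frequency = effective rate / multiplier.

def memory_bandwidth_mbit(effective_mts: float, bus_bits: int,
                          channels: int, multiplier: int) -> float:
    """Return actual memory bandwidth in Mbit/s."""
    core_mhz = effective_mts / multiplier
    return core_mhz * bus_bits * channels * multiplier

bw = memory_bandwidth_mbit(1066, 64, 1, 8)
print(bw)       # 68224.0 Mbit/s
print(bw / 8)   # 8528.0 MB/s, i.e. roughly 8.5 GB/s
```

Note that the core frequency and the multiplier cancel, which is why the simpler "rated frequency × bus width × channels" form gives the same answer.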

Memory bandwidth is crucial for CPU‑memory data exchange, and similarly, GPU memory bandwidth is vital for graphics performance.

Current mainstream graphics memory is GDDR5; older GDDR4 and low‑end GDDR3 still exist, while AMD uses HBM, which offers higher I/O width despite lower clock rates.

GDDR5 chips have a 32‑bit I/O and can reach a 1750 MHz clock (quad data rate), delivering 7 Gbps per pin and 28 GB/s per chip. HBM runs at 500 MHz (double data rate) for 1 Gbps per pin, but compensates with a very wide I/O.

GPU memory bandwidth formula: bandwidth (GB/s) = data frequency (Gbps) × effective bus width (bits) / 8.

Example with the NVIDIA GeForce GT 720 (64‑bit bus): a GDDR3 configuration at 900 MHz yields 14.4 GB/s, while GDDR5 at 1250 MHz yields 40 GB/s.
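Both GT 720 figures follow from the bandwidth formula above, assuming a 64-bit bus, a double data rate for GDDR3, and a quad data rate for GDDR5 (the rate multipliers are standard for those memory types, not stated explicitly in the example).

```python
# GPU bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8.

def gpu_bandwidth_gbs(clock_mhz: float, rate_multiplier: int,
                      bus_bits: int) -> float:
    """Return memory bandwidth in GB/s."""
    gbps_per_pin = clock_mhz * rate_multiplier / 1000  # data rate per pin
    return gbps_per_pin * bus_bits / 8

print(gpu_bandwidth_gbs(900, 2, 64))   # 14.4 GB/s  (GDDR3, double data rate)
print(gpu_bandwidth_gbs(1250, 4, 64))  # 40.0 GB/s  (GDDR5, quad data rate)
```

The nearly 3x gap between the two configurations comes almost entirely from GDDR5's higher per-pin data rate, since the bus width is identical.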


Tags: Performance, Hardware, Memory, Computer Architecture, DDR, SDRAM
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
