How Data Is Stored: An Overview of RAM, DRAM, and Memory Controllers
This article explains the fundamentals of data storage in computers, covering the concepts of RAM and DRAM, the role of capacitors and transistors, and how memory controllers and CPU caches work together to manage and accelerate access to binary information.
Author: Xiao Dao Xiao Xi
Link: https://www.jianshu.com/p/0aa5c09b2a6b
1. How Data Is Stored
Because data preservation is crucial, scientists have long contemplated how to store data using electronic circuits.
The idea: a device that continuously outputs a high voltage represents a 1, one that outputs a low voltage represents a 0, and the device must be able to switch freely between the two states.
Next, the memory module makes its debut.
2. RAM
Memory modules are more formally called RAM (Random Access Memory) because they allow random read/write access to any location.
Modern computers operate in binary; all data and instructions are represented as strings of 0s and 1s.
To store a single bit, early engineers considered two circuit approaches. The first is the static scheme: a small latch that holds the bit in a feedback loop of transistors.
The circuit may look complex, but its advantage is that it holds a stable 0 or 1 without any upkeep, hence the name SRAM (Static Random Access Memory).
However, it requires several transistors per bit (a typical cell uses six), so a large capacity like 16 GB would be costly and physically large, unsuitable for the limited space on a motherboard.
The second approach, the dynamic scheme, is much simpler: a single capacitor stores one bit, with a charged capacitor read as 1 and a discharged one as 0.
Each storage chip contains many such bit cells; for example, a 16 GB memory module holds 16 × 2^30 bytes × 8 bits = 137,438,953,472 bits, i.e., that many capacitors.
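As a sanity check on that number, here is a two-line Python sketch (the function name is ours, not from the article) that counts the bit cells in a module of a given size:

```python
# Number of one-bit cells (capacitors) in a DRAM module of a given size.
def bits_in_module(gib: int) -> int:
    bytes_total = gib * 2**30   # 1 GiB = 2^30 bytes
    return bytes_total * 8      # 8 bits per byte

print(bits_in_module(16))  # 137438953472, matching the figure above
```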
However, a capacitor leaks charge over time; as the voltage drops, the stored 1 eventually becomes indistinguishable from a 0.
To solve this, each capacitor must be periodically recharged. This dynamic refresh is what gives DRAM (Dynamic Random Access Memory) its name.
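To make the refresh idea concrete, here is a toy Python model. The leak rate and readout threshold are invented for illustration and bear no relation to real device physics; only the shape of the problem is real:

```python
# Toy model of why DRAM needs refresh: a cell's charge leaks over time,
# and a periodic refresh rewrites it before the value becomes unreadable.
LEAK_PER_MS = 0.01      # fraction of charge lost per millisecond (made up)
READ_THRESHOLD = 0.5    # below this, a stored 1 can no longer be distinguished

def charge_after(ms: float, start: float = 1.0) -> float:
    """Remaining charge after `ms` milliseconds without a refresh."""
    return start * (1 - LEAK_PER_MS) ** ms

# Without refresh, the cell decays below the threshold and the bit is lost:
assert charge_after(100) < READ_THRESHOLD

# With a refresh every 64 ms, the cell is recharged to full before that
# happens; even the worst case, just before a refresh, is still readable:
assert charge_after(64) > READ_THRESHOLD
```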
3. Memory Controller
When reading data, we must specify which bit by providing chip number, bank, row address, and column address.
These details are cumbersome, so they are abstracted behind a simple interface—the memory controller.
The memory controller acts as an intermediary between the CPU and the memory module.
In memory cells, capacitors leak and must be refreshed at least every 64 ms, a task managed by the memory controller.
Data on a memory module resides on multiple chips, each divided into banks and further into bit cells; accessing a specific bit requires specifying chip, bank, row, and column.
The CPU should not be burdened with this complexity, so the memory controller takes care of it.
Using bits directly is cumbersome, so we group 8 bits into a byte; the CPU assigns addresses to memory, and the memory controller translates these addresses to specific chip, bank, row, and column locations for read/write.
Thus a single address suffices; the controller maps it to the physical location, performs the access, and returns the data.
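The translation step can be sketched in Python. The field widths below are invented for illustration; real controllers choose address layouts tuned for bank interleaving and row-buffer locality:

```python
# Sketch of how a memory controller might slice a flat byte address into
# chip, bank, row, and column fields by peeling off groups of bits.
# 10 + 14 + 3 + 2 = 29 address bits in this hypothetical layout.
COL_BITS, ROW_BITS, BANK_BITS, CHIP_BITS = 10, 14, 3, 2

def decode(addr: int):
    col = addr & ((1 << COL_BITS) - 1)      # lowest bits: column
    addr >>= COL_BITS
    row = addr & ((1 << ROW_BITS) - 1)      # next: row within the bank
    addr >>= ROW_BITS
    bank = addr & ((1 << BANK_BITS) - 1)    # next: bank within the chip
    addr >>= BANK_BITS
    chip = addr & ((1 << CHIP_BITS) - 1)    # highest bits: which chip
    return chip, bank, row, col

print(decode(0x12345678))  # (chip, bank, row, col) for one flat address
```

The CPU only ever sees the flat address on the left of this mapping; everything on the right stays inside the controller.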
Because it is so performance-critical, the memory controller is now integrated directly into the CPU rather than sitting in a separate chipset on the motherboard.
As CPUs became faster, memory became a bottleneck; to mitigate this, CPUs include internal caches that store frequently accessed data, reducing the need to fetch from main memory.
However, CPU caches are limited in size, holding only a subset of data; most data still resides in main memory.
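The effect described above can be illustrated with a toy Python model: a hypothetical direct-mapped lookup table in front of a slow backing store. Real CPU caches work on cache lines with sets and ways, but the payoff is the same:

```python
# Minimal sketch of the caching idea: a small, fast table in front of a
# large, slow memory. Sizes and structure are invented for illustration.
CACHE_SLOTS = 8

class Memory:
    def __init__(self):
        self.data = {}      # the large, slow backing store
        self.fetches = 0    # count how often we take the slow path

    def read(self, addr):
        self.fetches += 1
        return self.data.get(addr, 0)

class Cache:
    def __init__(self, mem):
        self.mem = mem
        self.slots = [None] * CACHE_SLOTS   # (addr, value) pairs

    def read(self, addr):
        slot = addr % CACHE_SLOTS           # direct-mapped placement
        entry = self.slots[slot]
        if entry is not None and entry[0] == addr:
            return entry[1]                 # hit: no slow fetch needed
        value = self.mem.read(addr)         # miss: go to main memory
        self.slots[slot] = (addr, value)    # remember it for next time
        return value

mem = Memory()
cache = Cache(mem)
for _ in range(100):
    cache.read(0x40)    # repeatedly read the same address
print(mem.fetches)      # prints 1: only the first read reached main memory
```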
Architect's Guide
Dedicated to sharing programmer and architect skills (Java backend, systems, microservices, and distributed architecture) to help you grow into a senior architect.