
Modern Processors, Emerging Storage, and Database System Design: Challenges and Opportunities

This article reviews the evolution of modern multi‑core processors and non‑volatile memory, analyzes their impact on database system architecture, discusses cache‑friendly designs, distributed logging, and benchmark results, and highlights the opportunities and challenges for DBMS developers in the era of NVRAM.

Tencent Architect

The talk, presented by Anduin, a senior engineer in Tencent's infrastructure division, covers the evolution of modern processors and emerging storage technologies, and their implications for database system design.

Modern processors: Since around 2005, CPU manufacturers have shifted from frequency scaling to multi-core designs because of power and manufacturing limits. Many-core architectures (hundreds to thousands of cores) are now common, but the memory wall, the widening gap between processor speed and memory access latency, has become a critical bottleneck.

New storage devices: Non‑volatile memory (NVM) such as Intel's 3D XPoint combines the persistence of disks with the speed of DRAM. Its key characteristics are byte‑addressability, low latency, low power, long lifetime, large capacity, and high density. Phase‑change memory (PCM) is highlighted as the most mature NVM technology, offering two resistance states (crystalline/amorphous) that map to binary values.

PCM parameters: Compared with flash, PCM provides two orders of magnitude lower read/write latency and higher endurance, while offering comparable capacity. Its density is 2‑4× that of DRAM, and idle power consumption is only about 1% of DRAM.

DBMS design challenges: Traditional database architectures, designed for disk‑I/O bottlenecks, struggle on many‑core processors and NVM. The mismatch between rapidly evolving hardware and legacy DBMS designs leads to severe performance issues under high concurrency.

RAM‑locality principle: As emphasized by Jim Gray, locality of data and programs is essential to mitigating the CPU‑memory speed gap. Techniques include columnar storage for OLAP workloads, cache‑friendly data structures, and vectorized query execution that reduces per‑tuple function‑call overhead.
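The contrast between row-wise and column-wise layouts, and the vectorized style of execution, can be sketched as follows. This is a minimal illustration, not code from the talk; all names (`rows`, `columns`, the `sum_price_*` functions) are ours.

```python
# Row store: an OLAP aggregate over one attribute still drags every
# other attribute of each tuple through the cache.
rows = [
    {"id": 1, "price": 10.0, "qty": 3},
    {"id": 2, "price": 20.0, "qty": 1},
    {"id": 3, "price": 5.0,  "qty": 7},
]

def sum_price_row_store(table):
    total = 0.0
    for row in table:          # one step (and one dict lookup) per tuple
        total += row["price"]
    return total

# Column store: the 'price' column is one contiguous sequence, so a
# scan reads only the bytes it needs and cache lines are fully used.
columns = {
    "id":    [1, 2, 3],
    "price": [10.0, 20.0, 5.0],
    "qty":   [3, 1, 7],
}

def sum_price_column_store(cols):
    # Vectorized execution: one call processes the whole column
    # (a "vector") instead of one value per call, amortizing
    # interpretation and function-call overhead.
    return sum(cols["price"])

print(sum_price_row_store(rows))        # 35.0
print(sum_price_column_store(columns))  # 35.0
```

Real engines push this further with fixed-size vectors (e.g., batches of a few thousand values) so each batch stays cache-resident, but the layout difference above is the core of the idea.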

Lock management and critical sections: Coarse‑grained locks and large critical sections cause contention on many‑core systems. Examples include PostgreSQL's PGPROC structure and the adoption of finer‑grained lock tables, inheritance locks, and lock‑locality optimizations to improve scalability.
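One common way to obtain a finer-grained lock table is partitioning (lock striping): instead of a single global mutex guarding all lock state, the table is split into N partitions, each with its own mutex, so transactions touching different partitions never contend. The sketch below assumes that design; the class and names (`StripedLockTable`, `N_PARTITIONS`) are illustrative, not PostgreSQL's actual implementation.

```python
import threading

N_PARTITIONS = 16  # illustrative partition count

class StripedLockTable:
    """Toy partitioned lock table: one mutex per partition."""

    def __init__(self):
        self._mutexes = [threading.Lock() for _ in range(N_PARTITIONS)]
        self._held = [set() for _ in range(N_PARTITIONS)]

    def _part(self, resource):
        # Hash the resource name to pick its partition.
        return hash(resource) % N_PARTITIONS

    def acquire(self, resource, txn_id):
        p = self._part(resource)
        with self._mutexes[p]:        # only this partition is serialized
            self._held[p].add((resource, txn_id))

    def release(self, resource, txn_id):
        p = self._part(resource)
        with self._mutexes[p]:
            self._held[p].discard((resource, txn_id))
```

The critical section shrinks from "all lock state" to "one partition's state", which is what makes the structure scale on many-core machines.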

Distributed logging and write‑behind logging: Traditional write‑ahead logging (WAL) funnels all log records through a central log, typically protected by a global lock, which limits concurrency. Research from CMU proposes write‑behind logging for NVM: dirty data is written directly to NVM before commit, and only a small log record is persisted afterward, enabling near‑instant recovery and roughly 30% performance gains on NVM.
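The inversion relative to WAL can be shown with a toy commit path. This is a deliberately simplified sketch of the write-behind idea, with NVM modeled as ordinary Python dicts and lists; all names (`nvm_table`, `nvm_log`, `commit`) are ours, not from the CMU work.

```python
# NVM stand-ins: in a real system these would be byte-addressable,
# persistent memory regions, not heap objects.
nvm_table = {}   # the database itself, resident on NVM
nvm_log = []     # a small commit log, also on NVM

def commit(txn_id, writes):
    """Write-behind commit: data first, log record after."""
    # 1. Flush the transaction's dirty values directly to NVM.
    #    (Under WAL this order is reversed: log before data.)
    for key, value in writes.items():
        nvm_table[key] = value
    # 2. Persist only a tiny commit record. Because the data is
    #    already durable, the log carries no per-tuple redo images,
    #    so recovery reduces to reading the log to learn which
    #    transactions committed -- hence near-instant restart.
    nvm_log.append(("COMMIT", txn_id))

commit(1, {"a": 10, "b": 20})
```

The design only makes sense on byte-addressable NVM: on a block device, step 1 would cost a full page write per dirty page, which is exactly what WAL was invented to avoid.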

Benchmarks: TPC‑C (OLTP) and TPC‑H/TPC‑DS (OLAP) are discussed as standard workloads for performance evaluation, illustrating the impact of the proposed optimizations.

Conclusion: Application demands, industry data, and hardware advances drive DBMS evolution. In the multi‑core and memory‑computing era, designers must focus on scalability and data locality, while NVM promises to shift system architecture away from I/O constraints.

Tags: performance optimization, Databases, benchmarking, Non-Volatile Memory, processor architecture, cache locality
Written by Tencent Architect

We share insights on storage, computing, networking and explore leading industry technologies together.
