Tag: Memory Architecture

1 view on this tag.

Deepin Linux
Feb 4, 2025 · Fundamentals

Understanding CPU Cache: Architecture, Hierarchy, and Optimization Techniques

This article explains the fundamental role of CPU cache in bridging the speed gap between processors and memory, covering cache hierarchy, locality principles, write policies, coherence protocols, and practical code optimizations such as data alignment and loop restructuring to improve performance.

CPU cache · Cache Coherence · Data Alignment
0 likes · 31 min read
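The loop restructuring mentioned in this summary can be sketched in a few lines: in a row-major 2D array each row is stored contiguously, so iterating columns in the inner loop walks memory sequentially, while swapping the loops jumps a full row stride on every access. A minimal illustration of the idea, not code from the article itself:

```python
# Minimal sketch of cache-friendly loop ordering (illustrative only).
# A row-major 2D array stores each row contiguously, so an inner loop
# over columns touches consecutive addresses (good spatial locality);
# an inner loop over rows jumps one full row stride per access.

ROWS, COLS = 512, 512
matrix = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def sum_row_major(m):
    # Inner loop over columns: sequential, cache-line-friendly accesses.
    total = 0
    for row in m:
        for value in row:
            total += value
    return total

def sum_column_major(m):
    # Inner loop over rows: strided accesses, a new cache line per step.
    total = 0
    for c in range(COLS):
        for r in range(ROWS):
            total += m[r][c]
    return total
```

Both functions compute the same sum; on large arrays in a compiled language, the row-major version is typically several times faster purely because of cache-line reuse.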
Architects' Tech Alliance
Aug 13, 2024 · Fundamentals

Understanding High Bandwidth Memory (HBM): Architecture, Benefits, and Applications

High Bandwidth Memory (HBM) is a DRAM technology that uses stacked chips, through‑silicon vias (TSVs), and micro‑bump interconnects to deliver ultra‑high data rates, lower power consumption, and a compact form factor, addressing the bandwidth, latency, power, space, thermal, and complexity challenges of traditional 2D memory in GPU, AI, HPC, and data‑center workloads.

Artificial Intelligence · HBM · High Bandwidth Memory
0 likes · 10 min read
Refining Core Development Skills
May 14, 2024 · Fundamentals

Understanding Memory Physical Structure: Ranks, Chip Width, and Internal Organization

This article explains the physical structure of computer memory modules, interpreting label strings such as "2R*8" to describe ranks and chip bit-width, and details how chips, banks, and matrix cells are organized to support burst I/O and cache lines.

Hardware fundamentals · Memory Architecture · RAM
0 likes · 14 min read
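The "2R*8" label this summary refers to can be unpacked with simple arithmetic: assuming a standard 64‑bit (non‑ECC) DDR data bus — an assumption on my part, not stated in the listing — a 2‑rank module built from ×8 chips needs 64 / 8 = 8 chips per rank, 16 in total. A hypothetical helper:

```python
# Hypothetical helper that decodes a module label such as "2R*8".
# Assumes a standard 64-bit non-ECC DDR data bus; all names here are
# illustrative, not from the article.

DATA_BUS_WIDTH = 64  # bits on a typical non-ECC DIMM

def decode_module_label(label: str, bus_width: int = DATA_BUS_WIDTH):
    """Parse '<ranks>R*<chip_width>' and derive per-rank chip counts."""
    ranks_part, width_part = label.upper().split("R*")
    ranks = int(ranks_part)
    chip_width = int(width_part)           # data bits each chip supplies
    chips_per_rank = bus_width // chip_width
    return {
        "ranks": ranks,
        "chip_width": chip_width,
        "chips_per_rank": chips_per_rank,
        "total_chips": ranks * chips_per_rank,
    }
```

For example, `decode_module_label("2R*8")` yields 8 chips per rank and 16 chips total, matching the two-sided layout common on 2R×8 DIMMs.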
Architects' Tech Alliance
Feb 8, 2023 · Artificial Intelligence

Computing‑in‑Memory (CiM) Technology: Concepts, History, Advantages, Classifications and Application Scenarios

This article provides a comprehensive overview of Computing‑in‑Memory technology, covering its definition, historical evolution, performance advantages over traditional von Neumann architectures, technical classifications, storage‑media choices, market drivers, and its pivotal role in AI and big‑data workloads across edge, cloud, and automotive domains.

AI acceleration · Memory Architecture · big data
0 likes · 17 min read
Architects' Tech Alliance
Dec 22, 2017 · Fundamentals

NAND Flash Manufacturing Process, Architecture, and Key Metrics

This article explains the NAND Flash production flow, hierarchical structure from wafer to cell, and critical performance indicators such as endurance, data retention, bit error rates, and factors influencing SSD lifespan, providing a comprehensive overview for storage engineers and designers.

Data Retention · Endurance · Memory Architecture
0 likes · 8 min read
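The endurance metrics this summary mentions feed a standard back-of-envelope lifespan estimate: total bytes written (TBW) is roughly capacity times rated P/E cycles divided by write amplification. A hedged sketch with generic assumed numbers, not figures from the article:

```python
# Back-of-envelope SSD endurance estimate (illustrative only).
# TBW ≈ capacity × rated P/E cycles ÷ write amplification factor.
# The example parameters below are generic assumptions, not values
# taken from the article.

def estimate_tbw(capacity_gb: float, pe_cycles: int, waf: float) -> float:
    """Approximate total terabytes the NAND can absorb before wear-out."""
    return capacity_gb * pe_cycles / waf / 1000  # convert GB to TB

# A 1 TB TLC drive rated for 3000 P/E cycles with a write
# amplification factor of 2 gives roughly 1500 TBW.
tbw = estimate_tbw(1000, 3000, 2.0)
```

Real drives deviate from this figure because write amplification varies with workload, over-provisioning, and wear-leveling quality — exactly the influencing factors the article covers.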
Architects' Tech Alliance
Nov 12, 2017 · Fundamentals

Evolution of SSD Storage: From SATA and PCIe to NVMe and the Emerging Role of Storage Class Memory

This article traces the development of SSD technology—from early SATA and SAS drives through the PCIe and NVMe standards—explains Intel Optane's impact, introduces Storage Class Memory types such as PRAM, ReRAM, MRAM, and NRAM, and discusses their current applications and future challenges in high‑performance storage systems.

Memory Architecture · NVMe · Optane
0 likes · 9 min read