Tag: page cache


Deepin Linux
Jun 10, 2025 · Fundamentals

How Linux Memory Reclamation Works: Zones, Swap, and Compression Explained

This article explains Linux's memory reclamation mechanisms, covering the role of memory as the system's bloodstream, the three reclamation paths (fast-path reclaim, direct reclaim, and background reclaim via kswapd), zone watermarks, page cache structures, reverse mapping, and how swap and compression keep the system stable under memory pressure.

Linux · Memory Management · Memory Reclamation
0 likes · 52 min read
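The zone-watermark thresholds mentioned in this summary follow a simple arithmetic relationship in the classic kernel formula (before vm.watermark_scale_factor tuning): low = min + min/4 and high = min + min/2. A minimal sketch of how the three marks drive reclaim decisions; the function names and boundary comparisons are illustrative, not kernel code:

```python
# Simplified sketch of the classic zone watermark relationship
# (values in pages): low = min + min/4, high = min + min/2.
def zone_watermarks(wmark_min_pages):
    low = wmark_min_pages + wmark_min_pages // 4
    high = wmark_min_pages + wmark_min_pages // 2
    return {"min": wmark_min_pages, "low": low, "high": high}

# Below "low", kswapd wakes up and reclaims in the background;
# at or below "min", allocations fall into direct reclaim.
def reclaim_action(free_pages, wm):
    if free_pages <= wm["min"]:
        return "direct reclaim"
    if free_pages <= wm["low"]:
        return "wake kswapd"
    return "no reclaim"
```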
Mike Chen's Internet Architecture
Jun 3, 2025 · Big Data

Kafka High-Concurrency Core Design Explained

This article explains how Kafka achieves high concurrency through its distributed broker cluster, partitioned topics, sequential log writes, message compression, asynchronous producer mechanisms, and OS page‑cache techniques, illustrating the combined architectural and performance optimizations that enable massive throughput.

Kafka · asynchronous producer · distributed architecture
0 likes · 4 min read
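The partitioned-topics point above rests on key-based partitioning: a stable hash of the record key selects the partition, so per-key ordering is preserved while load spreads across brokers. A toy sketch; Kafka's default partitioner actually uses murmur2, and md5 here is only for illustration:

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Stable hash: the same key always lands on the same partition,
    # preserving per-key ordering within that partition's log.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

Different keys scatter across partitions, which is what lets many consumers read one topic in parallel.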
Cognitive Technology Team
May 21, 2025 · Fundamentals

Understanding Linux Page Cache: Concepts, Workflow, and Optimization

This article explains the Linux Page Cache mechanism, covering its core concepts, read/write workflows, data consistency, optimization strategies, real-world use cases, advanced topics, common misconceptions, and practical tips for improving system performance and resource management.

I/O optimization · Linux · Memory Management
0 likes · 8 min read
Tencent Cloud Developer
Jan 3, 2025 · Operations

Deep Dive into Linux Kernel Page Cache Xarray: Problem, Analysis, and Optimizations

The article examines a long‑standing hidden bug in the Linux kernel’s page‑cache Xarray that caused occasional data loss with Large Folio support, details its discovery and fix by the TencentOS team, and shows how consolidating multiple tree walks into a single walk in Linux 6.10 reduced latency and improved performance by about ten percent.

Bug Fix · Linux Kernel · Performance Optimization
0 likes · 27 min read
Deepin Linux
Nov 1, 2024 · Fundamentals

Will Data Be Lost When a Process Crashes During File Write?

This article examines the conditions under which data may be lost when a Linux process crashes while writing a file, explaining page cache behavior, the roles of stdio versus system calls, dirty page handling, write‑back mechanisms, and strategies such as fflush, fsync, and direct I/O to ensure data integrity.

Data Integrity · File I/O · Linux
0 likes · 22 min read
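The layering this summary describes, stdio buffer to page cache to disk, maps directly onto Python's buffered file objects: flush() empties the user-space buffer into the kernel page cache, and only os.fsync() forces the dirty pages to stable storage. A minimal sketch:

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "journal.log")
f = open(path, "w")   # buffered in user space, like stdio
f.write("record-1\n") # may still sit in the user-space buffer
f.flush()             # user buffer -> kernel page cache
os.fsync(f.fileno())  # page cache -> disk (survives power loss)
f.close()
```

If the process crashes before flush(), the buffered record is lost; after flush() but before fsync(), a kernel panic or power loss can still lose it, which is exactly the distinction the article draws.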
Mike Chen's Internet Architecture
Jun 4, 2024 · Big Data

Why Kafka Can Achieve Million‑Message‑Per‑Second Throughput: Disk Sequential Write, Zero‑Copy, Page Cache, and Memory‑Mapped Files

The article explains how Kafka attains ultra‑high write throughput by leveraging disk sequential writes, zero‑copy data transfer, operating‑system page cache, and memory‑mapped files, detailing each technique’s impact on latency, CPU usage, and overall performance.

Kafka · Zero Copy · big data
0 likes · 5 min read
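The zero-copy path described above is exposed to user space as sendfile(2): the kernel moves data from the source file's page cache to the destination descriptor without it ever entering a user-space buffer. A sketch using Python's os.sendfile (file-to-file targets require Linux 2.6.33 or newer); copy_zero_copy is an illustrative name:

```python
import os

def copy_zero_copy(src_path, dst_path):
    # Data flows page cache -> destination inside the kernel;
    # no read() into a user-space buffer ever happens.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        size = os.fstat(src.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(dst.fileno(), src.fileno(),
                               offset, size - offset)
            if sent == 0:
                break
            offset += sent
    return offset
```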
OPPO Kernel Craftsman
Feb 2, 2024 · Fundamentals

Linux Shared Memory (shmem) Deep Dive: Architecture, Implementation, and Practice

Linux’s shmem subsystem provides hybrid anonymous/file‑backed pages that enable diverse shared‑memory scenarios—parent‑child communication, IPC, tmpfs, Android ashmem, and memfd—by using APIs such as shmem_file_setup, handling page faults through cache and swap mechanisms, and employing a specialized reclamation process to manage memory efficiently.

Linux kernel · Memory Management · Virtual Memory
0 likes · 10 min read
Deepin Linux
Aug 18, 2023 · Fundamentals

Design and Implementation of a High‑Concurrency Memory Pool in C++

This article presents a comprehensive design and implementation of a high‑concurrency memory pool in C++, covering concepts such as fixed‑size block allocation, thread‑local caches, central and page caches, lock‑free techniques, span management, and performance benchmarking against standard malloc.

C++ · allocator · concurrency
0 likes · 57 min read
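The fixed-size block allocation at the heart of such a pool can be sketched in a few lines: an arena plus a free list of block indices gives O(1) alloc and free. A toy Python illustration of the idea, not the article's C++ design (which adds thread-local caches, span management, and lock-free paths):

```python
class FixedPool:
    """Toy fixed-size block allocator: one bytearray arena plus a
    free list of block indices (a sketch of the concept only)."""
    def __init__(self, block_size=64, blocks=1024):
        self.block_size = block_size
        self.arena = bytearray(block_size * blocks)
        self.free = list(range(blocks))   # indices of free blocks

    def alloc(self):
        if not self.free:
            raise MemoryError("pool exhausted")
        return self.free.pop()            # O(1) pop from the free list

    def view(self, i):
        off = i * self.block_size         # zero-copy view into the arena
        return memoryview(self.arena)[off:off + self.block_size]

    def free_block(self, i):
        self.free.append(i)               # O(1) return to the free list
```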
OPPO Kernel Craftsman
Jun 30, 2023 · Fundamentals

Understanding Linux Kernel Folio: From Page to Folio and Its Design Rationale

The Linux kernel introduced the struct folio abstraction to replace ad‑hoc compound‑page tricks, giving a clear collection‑oriented representation for power‑of‑two page groups such as THP and HugeTLB, and providing dedicated APIs that eliminate naming confusion, unify reference handling, and make memory‑management code safer and easier to understand.

Linux Kernel · Memory Management · compound page
0 likes · 15 min read
Architects' Tech Alliance
Mar 19, 2023 · Fundamentals

Storage Media Performance, Kernel/User Mode, DMA, Zero‑Copy, and PageCache

The article explains how different storage media affect I/O speed, describes kernel and user mode separation, introduces DMA and zero‑copy techniques such as mmap + write and sendfile, and discusses PageCache behavior, advantages, drawbacks, and tuning for high‑performance file transfers.

DMA · I/O · Storage
0 likes · 18 min read
Tencent Cloud Developer
Jan 9, 2023 · Operations

Linux I/O Optimization: Zero-Copy Techniques

The article explains Linux I/O optimization through zero‑copy techniques—such as mmap + write, sendfile, and splice—detailing memory hierarchy, the benefits of reducing user‑kernel copies, the suitability of async + direct I/O for large file transfers, real‑world uses like Kafka and Nginx, and inherent platform limitations.

DMA · Linux I/O · MMU
0 likes · 32 min read
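The mmap + write technique in this summary replaces the read() copy with a mapping: the source file's page-cache pages back the process address space directly, and write() consumes the mapping through the buffer protocol. A simplified sketch (copy_mmap_write is an illustrative name):

```python
import mmap, os

def copy_mmap_write(src_path, dst_path):
    # Map the source read-only: the page cache backs the mapping,
    # so no explicit read() into a user-space buffer is needed.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        with mmap.mmap(src.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            dst.write(mm)   # buffer protocol hands the mapping to write()
```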
Refining Core Development Skills
Nov 13, 2022 · Fundamentals

Understanding JDK NIO File I/O and Linux Kernel Mechanisms: Buffered vs Direct IO, Page Cache, and Dirty Page Management

This article provides a comprehensive analysis of how JDK NIO performs file read and write operations by examining the underlying Linux kernel mechanisms, including the differences between Buffered and Direct IO, the structure and management of the page cache, file readahead algorithms, and the kernel parameters governing dirty page writeback.

Buffered IO · Direct IO · Dirty Page Writeback
0 likes · 78 min read
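The dirty-page writeback knobs the article analyzes live under /proc/sys/vm. A small sketch that reads them where available and otherwise falls back to commonly documented defaults (the fallback values are typical, not guaranteed):

```python
import os

# Common defaults for the writeback knobs; real values vary by distro.
DEFAULTS = {
    "dirty_ratio": 20,                # % of memory before writers block
    "dirty_background_ratio": 10,     # % that wakes background flushers
    "dirty_expire_centisecs": 3000,   # age at which dirty pages are flushed
    "dirty_writeback_centisecs": 500, # flusher wakeup interval
}

def dirty_params():
    params = {}
    for name, default in DEFAULTS.items():
        try:
            with open(f"/proc/sys/vm/{name}") as f:
                params[name] = int(f.read().strip())
        except OSError:
            params[name] = default    # non-Linux or restricted environment
    return params
```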
Coolpad Technology Team
Oct 28, 2022 · Fundamentals

Understanding Linux Kernel Readahead: Concepts, Benefits, Drawbacks, and Code Analysis

This article explains the design background, performance benefits, potential drawbacks, synchronous and asynchronous mechanisms, key data structures, operational principles, illustrative examples, and critical code paths of Linux kernel file readahead, providing a comprehensive technical overview for developers and system engineers.

File System · I/O performance · Linux
0 likes · 15 min read
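Applications can steer the readahead machinery described here with posix_fadvise(2): POSIX_FADV_SEQUENTIAL asks the kernel to enlarge the readahead window for a file about to be scanned front to back. A sketch that degrades to a no-op where the call is unavailable (advise_sequential is an illustrative name):

```python
import os

def advise_sequential(path):
    # Hint sequential access so the kernel widens readahead;
    # silently skipped on platforms without posix_fadvise.
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        return os.read(fd, 4096)   # first chunk of the sequential scan
    finally:
        os.close(fd)
```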
Architects' Tech Alliance
Jul 10, 2022 · Fundamentals

Understanding Linux I/O: Storage Hierarchy, Page Cache, and System Call Mechanisms

This article explains Linux storage hierarchy, the roles of user‑space and kernel caches, the three‑layer I/O stack, page‑cache synchronization policies, file‑operation atomicity, locking mechanisms, and performance testing techniques for HDD and SSD devices.

I/O · Linux · Storage
0 likes · 16 min read
IT Services Circle
May 27, 2022 · Backend Development

Design Principles of a High‑Performance Message Broker (RocketMQ)

The article explains how to redesign a high‑traffic user‑registration flow by introducing an asynchronous queue layer, detailing the broker architecture, commitlog, consumeQueue, page‑cache, mmap, topic/tag routing, high‑availability strategies, and the role of a nameserver in a RocketMQ‑style system.

High Availability · MMAP · Message Queue
0 likes · 26 min read
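The commitlog/consumeQueue split in this summary can be sketched as one append-only byte stream plus a lightweight index of (offset, length) entries that consumers use to locate messages. A toy in-memory illustration; the class and method names are hypothetical:

```python
class CommitLog:
    """Toy append-only log: all messages go into one byte stream
    (commitlog) while a per-message (offset, length) index plays the
    consumeQueue role of locating entries cheaply."""
    def __init__(self):
        self.log = bytearray()
        self.index = []   # (offset, length) per appended message

    def append(self, payload: bytes) -> int:
        self.index.append((len(self.log), len(payload)))
        self.log += payload            # strictly sequential append
        return len(self.index) - 1     # logical message id

    def read(self, msg_id: int) -> bytes:
        off, length = self.index[msg_id]
        return bytes(self.log[off:off + length])
```

Sequential appends are what let the real broker ride the page cache; the index stays tiny because it never stores payloads.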
IT Services Circle
Mar 19, 2022 · Fundamentals

Understanding the Linux File I/O Stack: VFS, Filesystem, Block Layer, and SCSI

This article explains the Linux file I/O stack by outlining the path from user space through system calls to the kernel layers—VFS, filesystem, block layer, and SCSI—detailing each layer's role, page cache mechanisms, writeback processes, and direct I/O implementations with code examples.

Direct I/O · File I/O · Linux
0 likes · 11 min read
Sohu Tech Products
Mar 9, 2022 · Fundamentals

Understanding the Linux File I/O Stack: VFS, Filesystem, Block Layer, and SCSI

This article explains the Linux file I/O stack by outlining its clear route from user space to hardware, detailing the roles of VFS, the filesystem, the block layer, and the SCSI driver, and then dives into page cache mechanisms and code paths for both buffered and direct I/O.

Filesystem · I/O Stack · Linux
0 likes · 9 min read
360 Tech Engineering
Jan 21, 2022 · Fundamentals

Understanding Disk I/O: Read, Write, DMA, Page Cache, mmap and Performance Optimizations

This article explains the principles of disk I/O, covering read/write workflows, DMA acceleration, page cache, memory‑mapped files, buffering techniques, Linux kernel parameters and practical Java code examples to illustrate how to reduce CPU involvement and improve overall system performance.

Buffering · DMA · Disk I/O
0 likes · 12 min read
360 Quality &amp; Efficiency
Jan 21, 2022 · Fundamentals

Understanding Disk I/O: Read, Write, DMA, Page Cache, mmap and Performance Optimizations

The article explains the fundamentals of disk I/O, covering read/write processes, IO interrupts, DMA, page cache, mmap, buffered versus unbuffered file operations, ByteBuffer usage, Linux dirty‑page parameters, and how these mechanisms affect application performance and reliability.

DMA · Disk I/O · Linux
0 likes · 12 min read
Architecture Digest
Jan 5, 2022 · Fundamentals

Traditional System Call I/O, Read/Write Operations, and High‑Performance Optimizations in Linux

This article explains how Linux implements traditional system‑call I/O using read() and write(), details the data‑copy and context‑switch overhead of read and write operations, describes network and disk I/O, and introduces high‑performance techniques such as zero‑copy, multiplexing, and page‑cache optimizations.

I/O · Linux · Operating Systems
0 likes · 12 min read
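The read()/write() overhead this summary describes is the classic copy loop: every iteration copies data from the page cache into a user buffer and back again, which is precisely what the zero-copy techniques above eliminate. A minimal sketch (copy_traditional is an illustrative name):

```python
import os

def copy_traditional(src_path, dst_path, bufsize=64 * 1024):
    # Each os.read copies page cache -> user buffer (plus a context
    # switch); each os.write copies user buffer -> page cache.
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        total = 0
        while True:
            buf = os.read(src, bufsize)   # kernel -> user space
            if not buf:
                break
            os.write(dst, buf)            # user space -> kernel
            total += len(buf)
        return total
    finally:
        os.close(src)
        os.close(dst)
```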