
Boost Ceph Performance: Mastering Cache Pools and CRUSH Rules

This article explains how Ceph cache pools work, compares read‑only and write‑back cache types, walks through step‑by‑step commands for creating and configuring cache pools and for adjusting CRUSH rules to target SSDs, and outlines safe removal procedures for both cache modes.


1. How Cache Pools Work

In Ceph, a cache pool is a special storage pool that speeds up data access by keeping hot data on faster devices such as SSDs. When a client requests data, Ceph first checks the cache pool; a hit returns the data immediately, otherwise the data is fetched from the backing pool and cached for future reads.

Note: Cache tiering was deprecated in the Reef release.

2. Cache Pool Types

Read‑Only Cache (Read Cache)

Features:

Cache Type: Accelerates read operations by storing read data in the cache pool.

Data Consistency: All writes go directly to the primary pool, so consistency is maintained by the primary pool.

Use Cases: Ideal for read‑heavy, write‑light scenarios such as video on demand or static content delivery.

Operation:

Client read request checks the cache pool first.

If data is present, it is returned quickly from the cache.

If not, data is read from the primary pool and then cached for subsequent accesses.
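The read‑through flow above can be sketched in a few lines of Python. This is an illustrative simulation, not Ceph code; `cache` and `backing` are plain dicts standing in for the two pools:

```python
# Minimal read-through cache sketch (illustrative only, not Ceph code).
# On a miss, the object is fetched from the backing pool and promoted
# into the cache so the next read is served from the fast tier.
def read(obj, cache, backing):
    if obj in cache:            # cache hit: serve from the SSD tier
        return cache[obj]
    data = backing[obj]         # cache miss: fall back to the backing pool
    cache[obj] = data           # promote for subsequent reads
    return data

backing = {"vm-image": b"cold data"}
cache = {}
read("vm-image", cache, backing)   # miss: object promoted into the cache
assert "vm-image" in cache
read("vm-image", cache, backing)   # hit: served from the cache
```

Note that writes never touch this cache, which is why consistency stays with the primary pool.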

Write‑Back Cache (Writeback Cache)

Features:

Cache Type: Speeds up write operations by initially writing to the cache pool, then asynchronously flushing to the primary pool.

Data Consistency: Cached writes may be out‑of‑sync with the primary pool; eventual consistency is ensured by flushing the cache.

Use Cases: Suited for write‑heavy workloads such as log ingestion or high‑throughput databases.

Operation:

Write request is written to the cache pool and the client receives an immediate success response.

The cache data is flushed to the primary pool in the background.

Read requests can still retrieve the latest data from the cache.
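The write‑back flow can be sketched the same way; again a hypothetical simulation, where a `dirty` set tracks objects not yet flushed to the backing pool:

```python
# Minimal write-back cache sketch (illustrative only, not Ceph code).
# Writes land in the cache and are acknowledged immediately; a separate
# flush step copies dirty objects to the backing pool later.
def write(obj, data, cache, dirty):
    cache[obj] = data           # acknowledged as soon as the cache has it
    dirty.add(obj)              # the backing pool is now stale for this object

def flush(cache, dirty, backing):
    for obj in list(dirty):     # background flush to the backing pool
        backing[obj] = cache[obj]
        dirty.discard(obj)

cache, dirty, backing = {}, set(), {}
write("log-1", b"entry", cache, dirty)
assert "log-1" not in backing   # not flushed yet: eventual consistency
flush(cache, dirty, backing)
assert backing["log-1"] == b"entry"
```

The window between the write and the flush is exactly the window in which the cache and the primary pool disagree.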

3. Configuring a Cache Pool

Steps to create and attach a cache pool to a backing pool:

1. Create the cache pool (128 is the placement‑group count)

<code>ceph osd pool create cache_pool 128</code>

2. Add the cache pool to the backing pool and set it to write‑back mode

<code>ceph osd tier add libvirt-pool cache_pool
ceph osd tier cache-mode cache_pool writeback</code>

3. Bind the cache pool to the backing pool

This command redirects client I/O to the cache pool.

<code>ceph osd tier set-overlay libvirt-pool cache_pool</code>

4. (Optional) Enable additional write‑back settings

<code>ceph osd pool set cache_pool hit_set_type bloom
ceph osd pool set cache_pool hit_set_count 1
ceph osd pool set cache_pool hit_set_period 3600
ceph osd pool set cache_pool target_max_bytes 10737418240
ceph osd pool set cache_pool target_max_objects 10000
ceph osd pool set cache_pool min_read_recency_for_promote 1
ceph osd pool set cache_pool min_write_recency_for_promote 1
ceph osd pool set cache_pool cache_target_dirty_ratio 0.4
ceph osd pool set cache_pool cache_target_dirty_high_ratio 0.6
ceph osd pool set cache_pool cache_target_full_ratio 0.8</code>
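With the example `target_max_bytes` of 10 GiB, the ratio settings translate into absolute thresholds. A quick back‑of‑the‑envelope check:

```python
# Absolute thresholds implied by the example settings above.
target_max_bytes = 10737418240            # 10 GiB
dirty_ratio = 0.4                          # cache_target_dirty_ratio
dirty_high_ratio = 0.6                     # cache_target_dirty_high_ratio
full_ratio = 0.8                           # cache_target_full_ratio

flush_start = target_max_bytes * dirty_ratio        # ~4 GiB: flushing begins
flush_fast = target_max_bytes * dirty_high_ratio    # ~6 GiB: flushing speeds up
evict_start = target_max_bytes * full_ratio         # ~8 GiB: eviction begins

print(flush_start, flush_fast, evict_start)
# 4294967296.0 6442450944.0 8589934592.0
```

In other words, with these values the agent starts flushing dirty data at roughly 4 GiB and starts evicting clean objects at roughly 8 GiB of cache usage.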

Configuring CRUSH Class for SSD Placement

After creating a cache pool, you must ensure its data is placed on SSD OSDs by modifying CRUSH rules.

View existing CRUSH rules

<code>ceph osd crush rule dump</code>

Create a new CRUSH rule for SSDs (assuming SSD devices are already tagged)

<code>ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd</code>
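If the OSDs do not yet carry `ssd`/`hdd` device classes, they can be inspected and tagged first. A sketch with placeholder OSD IDs (`osd.0`, `osd.1`); adapt these to your cluster:

```shell
# Inspect the device classes currently known to CRUSH
ceph osd crush class ls
ceph osd tree

# Tag OSDs with a device class (OSD IDs here are placeholders).
# An existing class must be removed before a new one can be set.
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0
ceph osd crush set-device-class hdd osd.1
```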

Bind the cache pool to the new SSD CRUSH rule

<code># ceph osd pool set cache_pool crush_rule replicated_ssd
set pool 3 crush_rule to replicated_ssd
# ceph osd pool set libvirt-pool crush_rule replicated_hdd
set pool 2 crush_rule to replicated_hdd</code>

Verify that the cache pool and the backing pool use different CRUSH rules.
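One way to check, using the pool names from above:

```shell
# Confirm each pool's assigned CRUSH rule
ceph osd pool get cache_pool crush_rule
ceph osd pool get libvirt-pool crush_rule
```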

4. Deleting a Cache Pool

Removal steps differ for read‑only and write‑back caches.

Delete Read‑Only Cache

Disable caching by setting the cache mode to none:

<code>ceph osd tier cache-mode cache_pool none</code>

Unbind from the backing pool:

<code>ceph osd tier remove libvirt-pool cache_pool</code>

Delete Write‑Back Cache

Switch the cache to proxy mode so that new and modified objects are forwarded to the backing pool:

<code>ceph osd tier cache-mode cache_pool proxy</code>

Check whether any objects remain in the cache pool:

<code>rados -p cache_pool ls</code>

If objects remain, flush them manually:

<code>rados -p cache_pool cache-flush-evict-all</code>

Remove the overlay and detach the cache pool:

<code>ceph osd tier remove-overlay libvirt-pool
ceph osd tier remove libvirt-pool cache_pool</code>

Finally, delete the cache pool itself:

<code>ceph osd pool delete cache_pool cache_pool --yes-i-really-really-mean-it</code>
Tags: performance, linux, storage, Ceph, CRUSH, cache pool
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
