
How to Separate SSD and SATA OSDs in Ceph Using Custom CRUSH Rules

This guide demonstrates how to customize Ceph's CRUSH map to separate SSD and SATA OSDs into distinct buckets, create dedicated CRUSH rules, compile and apply the new map, and verify that data is correctly placed on the appropriate storage devices.


Background

In a Ceph cluster that contains both SSD and SATA drives, the default CRUSH placement distributes placement groups (PGs) evenly across all OSDs, which can waste SATA capacity and degrade performance. By customizing the CRUSH map, high‑IO data can be stored on SSD OSDs while less‑critical data uses SATA OSDs, improving performance and reducing cost.

Get the current CRUSH map and decompile it

<code>ceph osd getcrushmap -o crushmapdump
crushtool -d crushmapdump -o crushmapdump-decompiled
</code>

Edit the decompiled file and, after the root default section, add two new buckets: one for SSD OSDs (osd.0, osd.2, osd.4) and one for SATA OSDs (osd.1, osd.3, osd.5).

<code>root ssd {
    id -5
    alg straw
    hash 0
    item osd.0 weight 0.010
    item osd.2 weight 0.010
    item osd.4 weight 0.010
}

root sata {
    id -6
    alg straw
    hash 0
    item osd.1 weight 0.010
    item osd.3 weight 0.010
    item osd.5 weight 0.010
}
</code>
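To build intuition for how these buckets are used, here is a minimal Python sketch of weight-proportional "straw" selection. This is a simplified model for illustration only; Ceph's real straw algorithm uses Jenkins hashing and precomputed per-item straw scaling factors.

```python
# Simplified model of CRUSH "straw" bucket selection (illustrative only).
# Each item draws a pseudo-random "straw" scaled by its weight; the longest
# straw wins, so items are chosen in proportion to their weights, and the
# same key always maps to the same item.
import hashlib

def straw_select(items, key):
    """Pick one item from {name: weight} deterministically for a given key."""
    best, best_straw = None, -1.0
    for name, weight in items.items():
        h = hashlib.sha256(f"{key}:{name}".encode()).digest()
        r = int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)
        straw = r ** (1.0 / weight)               # higher weight -> longer straw
        if straw > best_straw:
            best, best_straw = name, straw
    return best

# The ssd bucket from the map above: three OSDs with equal weight 0.010.
ssd_bucket = {"osd.0": 0.010, "osd.2": 0.010, "osd.4": 0.010}

# With equal weights, each OSD receives roughly a third of the objects.
counts = {}
for i in range(3000):
    osd = straw_select(ssd_bucket, f"object-{i}")
    counts[osd] = counts.get(osd, 0) + 1
print(counts)
```

Because the selection is a pure function of the key and the bucket contents, clients can compute placement independently without consulting a central lookup table, which is the core idea behind CRUSH.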

Create CRUSH rules

Each pool references a CRUSH rule (identified here by its ruleset number) that controls where its data is placed. Define one rule for SSD-backed pools and another for SATA-backed pools.

<code>rule ssd-pool {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd   # use SSD bucket
    step chooseleaf firstn 0 type osd
    step emit
}
rule sata-pool {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take sata   # use SATA bucket
    step chooseleaf firstn 0 type osd
    step emit
}
</code>

Compile and inject the new CRUSH map

<code>crushtool -c crushmapdump-decompiled -o crushmapdump-compiled
ceph osd setcrushmap -i crushmapdump-compiled
</code>
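After injecting the map, it is worth sanity-checking that the new buckets and rules are present. A sketch, assuming the bucket and rule names used above and a running cluster:

```shell
# List the CRUSH rules; ssd-pool and sata-pool should appear alongside the default.
ceph osd crush rule ls

# Dump a single rule to confirm it takes the intended root bucket.
ceph osd crush rule dump ssd-pool

# The OSD tree should now show the ssd and sata roots with their OSDs.
ceph osd tree
```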

Add the following line to ceph.conf to keep OSDs from re-registering themselves under the default bucket layout on restart, which would undo the custom map:

<code>osd_crush_update_on_start=false
</code>

After applying the map, view the OSD tree to confirm the bucket layout (images omitted for brevity).

Create and verify the SSD pool

<code>ceph osd pool create ssd-pool 8 8
</code>

Set the pool’s CRUSH rule to the SSD rule (rule ID 1) and verify that the pool uses the SSD bucket.

<code>ceph osd pool set ssd-pool crush_rule ssd-pool
</code>
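The assignment can be checked from the CLI, for example:

```shell
# Report which CRUSH rule the pool now uses.
ceph osd pool get ssd-pool crush_rule
```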

Similarly, create a SATA pool and assign it rule ID 2, confirming that it uses the SATA bucket.
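The SATA side mirrors the SSD steps. A sketch, assuming the pool and rule names used above:

```shell
# Create the SATA-backed pool with 8 placement groups.
ceph osd pool create sata-pool 8 8

# Point it at the sata-pool CRUSH rule defined earlier (ruleset 2).
ceph osd pool set sata-pool crush_rule sata-pool
```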

Write data to both pools and validate placement

<code>rados -p &lt;pool_name&gt; put &lt;object_name&gt; &lt;file_name&gt;
</code>

Inspect the objects to ensure they reside on the correct OSD sets: SSD OSDs are [0, 2, 4] and SATA OSDs are [1, 3, 5]; the verification screenshots show the expected distribution.
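Without the screenshots, placement can be checked with `ceph osd map`, which reports the PG and acting OSD set for an object. The object names below are hypothetical:

```shell
# Write one test object into each pool.
rados -p ssd-pool put obj-ssd /etc/hosts
rados -p sata-pool put obj-sata /etc/hosts

# Show which PG and OSDs each object maps to; the acting set for
# obj-ssd should be a subset of [0, 2, 4], and obj-sata of [1, 3, 5].
ceph osd map ssd-pool obj-ssd
ceph osd map sata-pool obj-sata
```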

Tags: storage, SSD, Ceph, OSD, CRUSH, SATA
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
