
How to Build a Reliable SSD Benchmark: Environment, Tools, and Test Scripts

This article explains how to evaluate SSD performance by detailing the benchmark environment, required hardware, preferred testing tools like fio, essential disk requirements, common pitfalls, and a comprehensive set of test items and scripts for both raw disks and typical filesystems.

360 Zhihui Cloud Developer

This article covers how to evaluate SSD performance and select suitable products, focusing on the benchmark environment and the specific test items.

Benchmark Environment

1. Test Environment

SanDisk CloudSpeed 800 GB × 4, RAID 5

Micron 5100 ECO 960 GB × 4, RAID 5

Intel Xeon E5-2630 v2 × 1

96 GB DDR3 ECC

2. Tools

fio – powerful cross‑platform I/O load generator; supports many I/O engines and tuning parameters and ships with gnuplot plotting scripts.

iometer – GUI‑based I/O generator, mainly for Windows (not used here).

hdparm – Linux disk utility for information, secure erase, ATA parameters, HPA.

smartctl – Linux tool for SMART information.

Various languages (Python, Perl, PHP, Go, shell, awk, C, C++) for post-processing fio output.

Excel – used for data visualization; any language can generate charts.

fio is preferred because it is widely used on Linux, has strong community support, and offers professional features compared to iometer, iozone, or sysbench.

fio can run multiple processes or threads via a job file to simulate specific I/O workloads, and supports more than a dozen I/O engines, including sync, mmap, libaio, and posixaio.
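fio job files use a simple INI-style layout: shared settings go under a [global] section and each further section defines one job. A minimal sketch, assuming illustrative job names and a placeholder device path:

```shell
# Write an illustrative fio job file; `fio ssd-test.fio` would then run
# both jobs. /dev/sdX and the job names are placeholders.
cat > ssd-test.fio <<'EOF'
[global]
ioengine=libaio
direct=1
time_based
runtime=60
bs=4k

[randread-qd32]
rw=randread
iodepth=32
filename=/dev/sdX

[randwrite-qd32]
rw=randwrite
iodepth=32
filename=/dev/sdX
EOF
echo "wrote $(grep -c '^\[' ssd-test.fio) sections"
```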

3. Disk Requirements

Use new disks or disks that have been securely erased.

Clear HPA (Host Protected Area) if previously set.

Avoid RAID unless testing RAID performance; otherwise test single disks.
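As a sketch, disk preparation with hdparm might look like the following. /dev/sdX is a placeholder, and the DRYRUN=echo guard prints the commands instead of executing them, since a secure erase destroys all data on the drive:

```shell
# Prepare a disk for benchmarking (sketch; DESTROYS DATA when run for real).
DEV=/dev/sdX     # placeholder device
DRYRUN=echo      # set DRYRUN= (empty) to actually execute

$DRYRUN hdparm -N "$DEV"   # show current/native max sectors; an HPA exists if they differ
$DRYRUN hdparm -I "$DEV"   # confirm the drive reports "not frozen" before security commands
$DRYRUN hdparm --user-master u --security-set-pass p "$DEV"   # set a temporary password
$DRYRUN hdparm --user-master u --security-erase p "$DEV"      # issue ATA Secure Erase
```

To remove a previously set HPA, `hdparm -N p<native_max>` sets the visible sector count back to the native maximum reported in the first step.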

Pitfalls Encountered

Avoid fio version 2.0.13; it caused high CPU usage, data spikes, and crashes.

Compiling a newer fio may require zlib‑dev and libaio‑dev.

4. Test Script

The tests were run directly from the command line rather than using a job file.

/usr/local/bin/fio \
  --filename={FIO_TEST_FILE} \
  --direct=1 \
  --ioengine=libaio \
  --group_reporting \
  --lockmem=512M \
  --time_based \
  --userspace_reap \
  --randrepeat=0 \
  --norandommap \
  --refill_buffers \
  --rw=randrw \
  --ramp_time=10 \
  --log_avg_msec={LOG_AVG_MSEC} \
  --name={TEST_SUBJECT} \
  --write_lat_log={TEST_SUBJECT} \
  --write_iops_log={TEST_SUBJECT} \
  --disable_lat=1 \
  --disable_slat=1 \
  --bs=4k \
  --size={TEST_FILE_SIZE} \
  --runtime={RUN_TIME} \
  --rwmixread={READ_PERCENTAGE} \
  --iodepth={QUEUE_DEPTH} \
  --numjobs={JOB_NUMS}

Key parameters explained:

--filename : path to test file or device (requires root for devices).

--direct=1 : bypass cache.

--ioengine=libaio : use Linux native asynchronous I/O engine.

--group_reporting : aggregate results from all processes.

--lockmem=512M : pin 512 MB of memory with mlock(2), which can be used to simulate running with less RAM.

--time_based : continue testing for the specified time even after file I/O completes.

--rwmixread : percentage of reads in a mixed workload (100 = read‑only, 0 = write‑only).

--userspace_reap : speed up asynchronous I/O completion.

--randrepeat=0 : do not seed the random generator repeatably, so the random I/O pattern differs across runs.

--norandommap : do not force coverage of every block.

--ramp_time : warm‑up period before logging (10 s used).

--name : test identifier.

--write_lat_log , --write_bw_log , --write_iops_log : log latency, bandwidth, and IOPS respectively.

--bs=4k : block size.

--size : test file size (e.g., 100G) or full disk.

--runtime : test duration in seconds; combined with --time_based, fio runs for exactly this long.

--log_avg_msec : log aggregation interval (1000 ms reduces memory usage).
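Putting the parameters together, a filled-in invocation for a 4 KB random 70/30 read/write run at queue depth 32 might look like this; the device path, size, and job name are assumptions, and the command is echoed rather than executed:

```shell
# Template with placeholders filled in (illustrative values only).
CMD="fio --filename=/dev/sdb --direct=1 --ioengine=libaio \
--group_reporting --time_based --randrepeat=0 --norandommap --refill_buffers \
--rw=randrw --ramp_time=10 --log_avg_msec=1000 \
--name=randrw_70r --write_lat_log=randrw_70r --write_iops_log=randrw_70r \
--bs=4k --size=100G --runtime=300 --rwmixread=70 --iodepth=32 --numjobs=1"
echo "$CMD"   # print instead of running: /dev/sdb is a placeholder
```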

Test Items to Focus On

2.1 Full‑disk without filesystem

Sequential read, write, and mixed read/write.

Queue depths QD1‑QD32.

Random read/write at 512 B, 4 KB, 8 KB, 16 KB, and 512 B‑256 KB ranges, with mixes 50/50, 30/70, 70/30.

Write amplification tests.

2.2 Full‑disk with common filesystems (ext4, ext3, XFS)

Same sequential and random tests as above.

Write amplification under different over‑provisioning (OP) levels: 7 % OP, 13 % OP, 27 % OP.
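Write amplification can be estimated by sampling SMART counters (via smartctl -A) before and after a run and dividing the NAND-write delta by the host-write delta. The exact attribute names vary by vendor, and the counter values below are illustrative:

```shell
# Estimate write amplification from deltas of two vendor SMART counters
# (illustrative numbers; real values come from `smartctl -A` before/after).
host_pages=1000000   # host program pages written during the run
nand_pages=1300000   # NAND program pages written during the run
wa=$(awk -v h="$host_pages" -v n="$nand_pages" 'BEGIN { printf "%.2f", n / h }')
echo "write amplification: $wa"   # prints 1.30
```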

Running all tests can take many hours; typically a few representative workloads are selected for production environments.
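A reduced sweep can be scripted by looping over the parameters of interest. A sketch that generates, but does not run, one fio command per queue depth (the device path and runtime are placeholders):

```shell
# Generate one fio command per queue depth for a 4 KB random-read sweep;
# commands are echoed, not executed, since /dev/sdX is a placeholder.
for qd in 1 2 4 8 16 32; do
  echo "fio --filename=/dev/sdX --direct=1 --ioengine=libaio \
--rw=randread --bs=4k --runtime=60 --time_based --name=qd$qd --iodepth=$qd"
done
```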

Observations: for SATA SSDs, performance saturates around queue depth 32 at roughly 40‑60 % CPU usage; high‑performance NVMe devices may not saturate until around QD64, which requires careful iodepth and numjobs configuration.

Practical advice: store log files on a separate disk from the test SSD, ensure sufficient RAM for logging, split long runs into shorter segments, and consider disabling built‑in graph generation for large logs.

The next article will cover data processing.

Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
