
ClickHouse vs Doris vs Redis: Real‑World Query Performance Test with Flink

Using a 600k‑record IP‑range dataset, we built identical tables in ClickHouse and Doris plus a Redis skip‑list store, then ran three Flink‑Kafka streaming jobs to compare per‑lookup query latency under varying traffic rates. The result: Redis fastest, ClickHouse second, Doris slowest.


Design Test Tasks

We stored a dimension table of more than 600,000 IP‑range records in three back‑ends (ClickHouse, Doris, and Redis with a skip‑list), using identical table engines, index fields, and column types wherever each store allows, to ensure a fair comparison. The query pattern cannot be a simple JOIN: each lookup must satisfy the range condition start_ip <= ip <= end_ip, and because the data is continuously updated, the workload is a one‑by‑one lookup per incoming record.
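
Evaluating start_ip <= ip <= end_ip requires normalizing the dotted‑quad strings to comparable integers first. The article does not show its encoding, so the helper below is an assumed but conventional one:

```java
// Convert a dotted-quad IPv4 string to an unsigned 32-bit value held in a long,
// so that start_ip <= ip <= end_ip reduces to plain integer comparisons.
public final class Ipv4 {

    public static long toLong(String ip) {
        long value = 0;
        for (String octet : ip.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(toLong("1.2.3.4"));          // 16909060
        System.out.println(toLong("255.255.255.255"));  // 4294967295
    }
}
```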

To compare the three databases we built three Flink streaming jobs that read IP records from Kafka, query a specific store, and write the result into a ClickHouse result table (a minimal sketch of one such pipeline follows the task list):

Task 1: Kafka → Doris query → ClickHouse write

Task 2: Kafka → ClickHouse query → ClickHouse write

Task 3: Kafka → Redis query → ClickHouse write
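
Here is a minimal sketch of Task 2 (Kafka → ClickHouse query → ClickHouse write) in Flink's DataStream API. Broker, topic, host, database, and credential names are illustrative assumptions, the lookup uses ClickHouse's MySQL‑compatible endpoint as described in the next section, and the final JDBC write is abbreviated:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ClickHouseLookupJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Broker, topic, and consumer-group names are illustrative.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("ip_events")
                .setGroupId("ip-lookup-clickhouse")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-ip-source")
           .map(new RichMapFunction<String, String>() {
               private transient Connection conn;
               private transient PreparedStatement stmt;

               @Override
               public void open(Configuration parameters) throws Exception {
                   // One connection per parallel subtask. ClickHouse's
                   // MySQL-compatible endpoint (default port 9004) matches the
                   // MySQL JDBC driver setup described in the next section.
                   conn = DriverManager.getConnection(
                           "jdbc:mysql://clickhouse:9004/dim", "default", "");
                   stmt = conn.prepareStatement(
                           "SELECT address FROM ip_ranges"
                         + " WHERE start_ip <= ? AND end_ip >= ? LIMIT 1");
               }

               @Override
               public String map(String ip) throws Exception {
                   long ipNum = Ipv4.toLong(ip);  // helper from the earlier sketch
                   stmt.setLong(1, ipNum);
                   stmt.setLong(2, ipNum);
                   try (ResultSet rs = stmt.executeQuery()) {
                       return rs.next() ? ip + "|" + rs.getString(1) : ip + "|unknown";
                   }
               }

               @Override
               public void close() throws Exception {
                   if (stmt != null) stmt.close();
                   if (conn != null) conn.close();
               }
           })
           // The real job writes (ip, ip_and_address, hit_source, data_time,
           // insert_time, time_gap) to the ClickHouse result table, e.g. via
           // flink-connector-jdbc; print() stands in for that sink here.
           .print();

        env.execute("kafka-clickhouse-lookup");
    }
}
```

Opening the connection in open() gives each parallel subtask its own client, which is exactly the per‑client connection management the query‑interface section notes as the only real difference between the three stores.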

ClickHouse Result Table Design

The result table captures per‑record latency and source information. Its schema is:

ip – the IPv4 string to look up.

ip_and_address – concatenated geographic and ISP information returned by the lookup.

hit_source – identifier of the source database (Doris, ClickHouse, or Redis).

data_time – timestamp when Flink received the IP from Kafka.

insert_time – timestamp when the lookup result was obtained.

time_gap – insert_time minus data_time in milliseconds, i.e., the latency of a single query.
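
A plausible ClickHouse DDL for this table, created over the same JDBC connection the jobs use, might look like the following; the column types, engine, and ordering key are assumptions, since the article only names the columns:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateResultTable {
    public static void main(String[] args) throws Exception {
        // Assumed types and engine; the article specifies only the column names.
        String ddl =
            "CREATE TABLE IF NOT EXISTS ip_lookup_result ("
          + "  ip             String,"
          + "  ip_and_address String,"
          + "  hit_source     String,"           // 'doris' | 'clickhouse' | 'redis'
          + "  data_time      DateTime64(3),"
          + "  insert_time    DateTime64(3),"
          + "  time_gap       UInt32"            // insert_time - data_time, in ms
          + ") ENGINE = MergeTree ORDER BY (hit_source, data_time)";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://clickhouse:9004/results", "default", "");
             Statement st = conn.createStatement()) {
            st.execute(ddl);
        }
    }
}
```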

Query Interface Design

All three stores are accessed from Flink as follows:

ClickHouse: standard MySQL‑compatible JDBC driver.

Doris: MySQL‑compatible JDBC driver (same driver class as ClickHouse).

Redis: a custom Jedis wrapper that performs the range lookup on the skip‑list (see the sketch below).

The only notable difference is the connection‑pool management required by each client in Flink’s distributed runtime.
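
One common way to realize the skip‑list range lookup in Redis is a sorted set (Redis implements sorted sets as skip lists) whose score is the numeric start_ip and whose member packs end_ip together with the address payload: a single ZREVRANGEBYSCORE returns the candidate range with the largest start_ip not exceeding the query IP, and one comparison against end_ip confirms the hit. The article does not show its wrapper, so the key name and member encoding below are assumptions:

```java
import redis.clients.jedis.Jedis;

public class RedisIpRangeLookup {

    private static final String KEY = "ip_ranges";  // hypothetical key name

    /** Load one (assumed non-overlapping) range: score = start_ip, member = "end_ip|address". */
    public static void put(Jedis jedis, long startIp, long endIp, String address) {
        jedis.zadd(KEY, startIp, endIp + "|" + address);
    }

    /** Return the address whose range covers ip, or null if none does. */
    public static String lookup(Jedis jedis, long ip) {
        // Candidate with the largest start_ip <= ip: one skip-list descent,
        // i.e. ZREVRANGEBYSCORE ip_ranges <ip> 0 LIMIT 0 1.
        for (String member : jedis.zrevrangeByScore(KEY, ip, 0, 0, 1)) {
            String[] parts = member.split("\\|", 2);
            long endIp = Long.parseLong(parts[0]);
            return ip <= endIp ? parts[1] : null;  // confirm ip is inside the range
        }
        return null;  // no range starts at or below ip
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            put(jedis, 16777216L, 16777471L, "CN|Telecom");  // 1.0.0.0 - 1.0.0.255
            System.out.println(lookup(jedis, 16777258L));    // 1.0.0.42 -> CN|Telecom
        }
    }
}
```

This keeps each lookup at O(log n) over the 600k ranges with no scan, which is consistent with the 1–2 ms latencies reported below.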

Testing Execution

We ran the three tasks under four traffic levels (10, 100, 500, and 2,000 records per second) using Kafka as the upstream source. For each level we recorded the distribution of time_gap; the original article reports representative latency averages in three charts.

[Latency charts 1–3 from the original experiment: time_gap distributions for the three tasks at each traffic level]
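
With the result table populated, reproducing the comparison is a single aggregation of time_gap by hit_source. This sketch reuses the assumed table and connection details from earlier (quantile() is ClickHouse SQL):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LatencyReport {
    public static void main(String[] args) throws Exception {
        // Average and p99 latency per source store, straight from the result table.
        String sql =
            "SELECT hit_source, count() AS n, avg(time_gap) AS avg_ms,"
          + " quantile(0.99)(time_gap) AS p99_ms"
          + " FROM ip_lookup_result GROUP BY hit_source ORDER BY avg_ms";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://clickhouse:9004/results", "default", "");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%-10s n=%d avg=%.2f ms p99=%.2f ms%n",
                        rs.getString("hit_source"), rs.getLong("n"),
                        rs.getDouble("avg_ms"), rs.getDouble("p99_ms"));
            }
        }
    }
}
```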

Conclusions

Latency measurements across all traffic levels lead to three clear conclusions:

Redis provides the fastest and most stable query latency, typically 1–2 ms per record.

ClickHouse ranks second, with average latencies around 6–7 ms, but exhibits larger variance than Redis.

Doris performs the worst, averaging about 10 ms and showing greater fluctuations; under high traffic it even failed during an overnight endurance test.

For this dimension‑table lookup use case, the Redis‑based solution is recommended, ClickHouse is an acceptable fallback, and Doris is not advisable.

Tags: Flink, Redis, Kafka, ClickHouse, Database Performance, Doris
Written by ITPUB

Official ITPUB account sharing technical insights, community news, and exciting events.
