Understanding ClickHouse-Keeper: Features, Configuration, Commands, and Migration from ZooKeeper
ClickHouse‑Keeper is a C++ ZooKeeper replacement built on the Raft consensus algorithm, offering linearizable reads, snapshot/log compression, and simpler deployment. This article explains its advantages over ZooKeeper, walks through a configuration template, the startup command, and parameter details, shows how to run health checks, and gives a step‑by‑step migration from ZooKeeper using the clickhouse-keeper-converter tool.
1. What is ClickHouse‑Keeper
ClickHouse‑Keeper was introduced in ClickHouse 21.8 and reached feature completeness by 21.12. It is a ZooKeeper replacement written in C++ and implemented with the Raft consensus algorithm, providing linearizable read and write capabilities.
2. Comparison between ZooKeeper and ClickHouse‑Keeper
ZooKeeper has several pain points when used with ClickHouse:
- Java implementation, separate from ClickHouse's tech stack
- operational inconvenience and the requirement for independent deployment
- zxid overflow
- uncompressed snapshots and logs
- no linearizable reads

ClickHouse‑Keeper addresses these with:
- a C++ implementation and a unified tech stack
- deployment either standalone or embedded in the ClickHouse server
- no zxid overflow
- better read performance
- snapshot/log compression and verification
- linearizable reads and writes
3. Configuration
Configuration is similar to a typical ClickHouse cluster setup; the Keeper server runs only when a <keeper_server> section is present in the configuration file. The original article shows a template configuration as an image (omitted here).
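A minimal sketch of such a template is shown below; the ports, paths, and hostnames are illustrative assumptions, not recommendations, and should be adapted to your environment:

```xml
<clickhouse>
    <keeper_server>
        <!-- Port clients connect to (ZooKeeper-compatible protocol). -->
        <tcp_port>9181</tcp_port>
        <!-- Must be unique per Keeper node. -->
        <server_id>1</server_id>
        <!-- Put logs on fast storage if possible. -->
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>information</raft_logs_level>
        </coordination_settings>

        <!-- One <server> entry per Keeper node in the quorum. -->
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>keeper-1</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>keeper-2</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>keeper-3</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```

The <raft_configuration> port (9234 here) is the internal Raft communication port between Keeper nodes, distinct from the client-facing tcp_port.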
4. Startup Command
clickhouse-keeper --config /etc/your_path_to_config/config.xml
5. Parameter Description
tcp_port – port for client connections (ZooKeeper's conventional default is 2181; the examples in this article use 9181).
tcp_port_secure – SSL port for client‑server communication.
server_id – unique ID for each keeper node.
log_storage_path – path for log files, preferably on high‑IO storage.
snapshot_storage_path – path for snapshots.
coordination_settings – includes operation_timeout_ms, min_session_timeout_ms, session_timeout_ms, dead_session_check_period_ms, heart_beat_interval_ms, election_timeout_lower_bound_ms, rotate_log_storage_interval, reserved_log_items, snapshot_distance, snapshots_to_keep, max_requests_batch_size, raft_logs_level, auto_forwarding, shutdown_timeout.
raft_configuration – defines Id, Hostname, and Port for each node.
6. Health Checks
6.1 ruok
Run echo ruok | nc 127.0.0.1 9181; a healthy node responds with imok.
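Keeper also answers other ZooKeeper-style four-letter commands such as mntr and srvr. A small health-check sketch, assuming a Keeper instance listening on 127.0.0.1:9181 (adjust to your tcp_port):

```shell
# Liveness: a healthy node answers 'imok'.
resp=$(echo ruok | nc 127.0.0.1 9181)
if [ "$resp" = "imok" ]; then
    echo "keeper is alive"
else
    echo "keeper did not respond" >&2
fi

# Monitoring counters: latency, server state (leader/follower),
# znode and watch counts, etc.
echo mntr | nc 127.0.0.1 9181

# Brief server details: mode, connections, node count.
echo srvr | nc 127.0.0.1 9181
```

These commands require a running Keeper instance, so they are shown as an operational sketch rather than a verifiable script.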
6.2 Verify Keeper cluster in ClickHouse
Query the system.zookeeper table from ClickHouse; if rows come back, the Keeper cluster is installed and reachable (result screenshot omitted).
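One way to run that check from the shell, assuming clickhouse-client can reach a server configured to use the Keeper cluster (host and credentials omitted for brevity):

```shell
# system.zookeeper requires a path filter; '/' lists the top-level
# znodes. Any rows returned mean ClickHouse can talk to Keeper.
clickhouse-client --query \
    "SELECT name, czxid, mzxid FROM system.zookeeper WHERE path = '/'"
```

This requires a live ClickHouse server with a <zookeeper> section pointing at the Keeper nodes, so it is an operational sketch rather than a standalone script.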
7. Migrating from ZooKeeper to ClickHouse‑Keeper
The official ClickHouse‑Keeper‑Converter tool can dump ZooKeeper data into a snapshot that ClickHouse‑Keeper can load. Migration steps:
Stop all ZooKeeper nodes.
Restart the ZooKeeper leader and stop it again; this forces ZooKeeper to write a consistent snapshot.
Run clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 --output-dir /path/to/clickhouse/keeper/snapshots to create the snapshot.
Copy the generated snapshot to the Keeper nodes and start ClickHouse‑Keeper, which loads the snapshot on startup.
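The steps above can be sketched as a shell sequence; the systemd service names are assumptions for a typical installation and will differ per environment:

```shell
#!/bin/sh
set -e

# 1. Stop ZooKeeper (run on every ZooKeeper host).
sudo systemctl stop zookeeper

# 2. On the former leader only: start it once more and stop it again,
#    forcing ZooKeeper to write a consistent snapshot.
sudo systemctl start zookeeper
sudo systemctl stop zookeeper

# 3. Convert the ZooKeeper snapshot and transaction logs into a
#    snapshot that ClickHouse-Keeper can load.
clickhouse-keeper-converter \
    --zookeeper-logs-dir /var/lib/zookeeper/version-2 \
    --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 \
    --output-dir /path/to/clickhouse/keeper/snapshots

# 4. Start ClickHouse-Keeper; it picks up the generated snapshot.
sudo systemctl start clickhouse-keeper
```

Run the converter on a host that has the ZooKeeper data directories available, and make sure --output-dir matches the snapshot_storage_path configured for Keeper.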
Reference: https://clickhouse.com/docs/en/operations/clickhouse-keeper/
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise‑grade open‑source tools and services for MySQL, releases a premium open‑source component each year on 1024, and continuously operates and maintains its tools.