
Eureka vs Zookeeper: AP vs CP Trade‑offs in Service Registry Design

The article compares Eureka and Zookeeper as service registry solutions, explaining how Eureka follows an AP model with high availability and eventual consistency, while Zookeeper adopts a CP model prioritizing strong consistency, and discusses their suitable scenarios, limitations, and design considerations for distributed systems.


Preface

In a distributed architecture the CAP theorem is unavoidable: data is replicated across a network, so partitions will occur and partition tolerance (P) must be accepted, leaving a forced trade-off between availability (A) and strong consistency (C). Most production registries choose to sacrifice strong consistency in favor of eventual consistency.

Service registry data (IP + port) may be inconsistent across queries, leading to load imbalance among nodes.
As long as the registry converges to a consistent state within the SLA (e.g., 1 s), traffic quickly becomes statistically uniform, making an eventual‑consistency design acceptable in practice.
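As a toy illustration of why brief staleness is tolerable, the self-contained sketch below (all addresses and names are hypothetical) compares round-robin traffic over a stale replica view that is missing one instance against the converged view: once the registry converges within the SLA, the distribution is uniform again.

```java
import java.util.*;

public class EventualConsistencyDemo {
    // Round-robin `requests` calls over whatever instance list a replica returns.
    static List<String> pickRoundRobin(List<String> instances, int requests) {
        List<String> targets = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            targets.add(instances.get(i % instances.size()));
        }
        return targets;
    }

    // Count how many requests each instance received.
    static Map<String, Integer> countPerInstance(List<String> targets) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String t : targets) counts.merge(t, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        // A stale replica sees only 2 of 3 instances -> temporary imbalance.
        List<String> stale = List.of("10.0.0.1:8080", "10.0.0.2:8080");
        // After convergence all 3 instances are visible -> even spread.
        List<String> fresh = List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080");

        System.out.println("stale view: " + countPerInstance(pickRoundRobin(stale, 300)));
        System.out.println("fresh view: " + countPerInstance(pickRoundRobin(fresh, 300)));
    }
}
```

The imbalance only lasts as long as the replica lags, which is why a bounded convergence window makes AP acceptable here.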

1. Eureka – AP

Eureka guarantees availability and achieves eventual consistency. All Eureka nodes are peers; the failure of a few nodes does not affect the remaining ones, which continue to provide registration and lookup services. Clients automatically switch to another node if a connection fails, ensuring the registry stays available even though the returned data may not be the latest.
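The client-side failover behavior can be sketched as follows. This is a minimal model, not Eureka's actual client code: the peer names are hypothetical, and reachability is injected as a predicate so the example needs no network.

```java
import java.util.*;
import java.util.function.Predicate;

public class RegistryClient {
    private final List<String> peers;              // peer registry nodes, preferred order
    private final Predicate<String> isReachable;   // injected so the demo needs no network

    RegistryClient(List<String> peers, Predicate<String> isReachable) {
        this.peers = peers;
        this.isReachable = isReachable;
    }

    // AP design: any surviving peer can serve registration and lookup,
    // so the client simply walks the list until one answers.
    Optional<String> connect() {
        for (String peer : peers) {
            if (isReachable.test(peer)) return Optional.of(peer);
        }
        return Optional.empty(); // total outage: fall back to a local cache
    }

    public static void main(String[] args) {
        // Simulate the first peer being down; the client silently moves on.
        RegistryClient client = new RegistryClient(
                List.of("eureka-1", "eureka-2", "eureka-3"),
                peer -> !peer.equals("eureka-1"));
        System.out.println("connected to: " + client.connect().orElse("none"));
    }
}
```

The key property is that no single node is special: losing any subset of peers degrades freshness, not availability.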

2. Zookeeper – CP

Zookeeper pauses service during leader election; the ensemble is unavailable until a new leader is elected. Once the election completes it resumes serving requests, but it always prioritises consistency over availability: a write succeeds only after a majority quorum acknowledges it.
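The majority-quorum arithmetic behind this trade-off is worth making concrete. The sketch below just computes the standard formulas (quorum = ⌊n/2⌋ + 1, tolerated failures = n − quorum); it is an illustration, not Zookeeper code.

```java
public class QuorumDemo {
    // An ensemble of n voters needs a strict majority to commit a write.
    static int quorum(int n) { return n / 2 + 1; }

    // How many node failures the ensemble survives while keeping a quorum.
    static int toleratedFailures(int n) { return n - quorum(n); }

    public static void main(String[] args) {
        for (int n : new int[]{3, 5, 7}) {
            System.out.println(n + " nodes: quorum=" + quorum(n)
                    + ", tolerates " + toleratedFailures(n) + " failures");
        }
    }
}
```

This is also why ensembles use odd sizes: 4 nodes tolerate the same single failure as 3, while adding coordination cost.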

2.1 Application Scenarios

Asynchronous result notification for message queues

Distributed locks

Metadata or configuration centers (e.g., Dubbo, Kafka)

High‑availability failover

Primary‑backup switching
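Several of these scenarios (locks, primary-backup switching) rely on Zookeeper's ephemeral sequential znodes, where the candidate holding the lowest sequence number wins. The sketch below mimics only that "lowest sequence wins" rule with a plain map standing in for znodes; it does not talk to a real Zookeeper server, and all candidate names are made up.

```java
import java.util.Comparator;
import java.util.Map;

public class LeaderElectionSketch {
    // In real Zookeeper each candidate creates an EPHEMERAL_SEQUENTIAL znode;
    // here "candidate -> sequence number" is modeled as a map entry.
    static String leader(Map<String, Integer> candidateToSeq) {
        return candidateToSeq.entrySet().stream()
                .min(Comparator.comparingInt(e -> e.getValue()))
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        Map<String, Integer> nodes = Map.of("workerA", 2, "workerB", 0, "workerC", 1);
        System.out.println("leader: " + leader(nodes)); // lowest sequence wins
    }
}
```

In the real protocol the ephemeral node of a crashed leader disappears when its session expires, which is what triggers the next candidate's watch and a new election.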

2.2 Scenarios Not Recommended for Zookeeper

When a network partition isolates a data center, the Zookeeper nodes in that zone may lose their quorum and become write-unavailable, preventing services there from registering during deployment, scaling, or restart. This violates a core principle: a registry failure must not break connectivity between otherwise healthy services within the same zone.

2.3 Scalability of Zookeeper

Zookeeper’s write path does not scale horizontally; adding nodes does not increase write throughput. A practical workaround is to split business domains across multiple Zookeeper clusters, but this adds operational complexity and may still violate the registry’s responsibility to keep services reachable.
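One way to picture the split-by-domain workaround is a deterministic routing function from business domain to cluster. This is a hypothetical sketch of the idea, not a recommended scheme; note that naive hash routing also shows the operational cost, since adding a cluster remaps domains.

```java
import java.util.List;

public class ClusterRouter {
    // Hypothetical mitigation: pin each business domain to one Zookeeper
    // cluster so no single ensemble carries every write.
    static String clusterFor(String domain, List<String> clusters) {
        return clusters.get(Math.floorMod(domain.hashCode(), clusters.size()));
    }

    public static void main(String[] args) {
        List<String> clusters = List.of("zk-a", "zk-b", "zk-c");
        for (String domain : List.of("orders", "payments", "inventory")) {
            System.out.println(domain + " -> " + clusterFor(domain, clusters));
        }
    }
}
```

Each domain always lands on the same cluster, but cross-domain lookups now need to know the routing rule, which is exactly the added complexity the text warns about.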

2.4 Persistent Storage

Zookeeper’s ZAB protocol logs every write and periodically snapshots in‑memory data to disk, ensuring consistency and durability. However, the core data of a service registry—real‑time healthy service addresses—does not require persistence; only metadata such as version, group, data‑center, weight, and auth policies need durable storage.
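The durable/volatile split the text describes can be modeled as two stores side by side. This is a conceptual sketch under assumed names, not any real registry's storage layer: metadata would be persisted, while live addresses are deliberately lost on restart and rebuilt from heartbeats.

```java
import java.util.*;

public class RegistryStore {
    // Durable side: slow-changing metadata (version, group, weight, auth ...).
    final Map<String, String> metadata = new HashMap<>();
    // Volatile side: live addresses, rebuilt from client heartbeats.
    final Set<String> liveAddresses = new HashSet<>();

    void register(String address) { liveAddresses.add(address); }
    void expire(String address)   { liveAddresses.remove(address); }

    // On restart, metadata would be reloaded from disk; the address set
    // intentionally starts empty and refills as heartbeats arrive.
    void restart() { liveAddresses.clear(); }

    public static void main(String[] args) {
        RegistryStore store = new RegistryStore();
        store.metadata.put("orders.weight", "100");
        store.register("10.0.0.1:8080");
        store.restart();
        System.out.println("metadata kept: " + store.metadata.keySet()
                + ", live after restart: " + store.liveAddresses);
    }
}
```

Persisting the address set, as Zookeeper's snapshots effectively do, buys nothing here: a snapshot of "healthy addresses" is stale the moment it is written.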

2.5 Disaster Recovery

If the entire registry crashes, service‑to‑service calls should remain unaffected. The registry should be a weak dependency, used only for registration, deregistration, and scaling events. Clients need a cache (client snapshot) and robust health‑check mechanisms to survive full registry outages.
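A minimal sketch of the client-snapshot idea, under assumed names: the client keeps the last answer it got from the registry, and a `null` answer here models a full registry outage, during which calls continue against the cached addresses.

```java
import java.util.List;

public class CachingLookup {
    private List<String> snapshot = List.of(); // last known-good instance list

    // `registryAnswer == null` models a registry outage in this sketch.
    List<String> lookup(List<String> registryAnswer) {
        if (registryAnswer != null) {
            snapshot = List.copyOf(registryAnswer); // refresh the client snapshot
        }
        return snapshot; // outage: keep routing to cached addresses
    }

    public static void main(String[] args) {
        CachingLookup client = new CachingLookup();
        client.lookup(List.of("10.0.0.1:8080", "10.0.0.2:8080")); // registry up
        System.out.println("during outage: " + client.lookup(null)); // registry down
    }
}
```

Combined with client-side health checks on the cached addresses, this is what turns the registry into the weak dependency the text calls for.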

2.6 Zookeeper Health Checks

Health monitoring often relies on Zookeeper session activity and ephemeral nodes, which can be misleading because a healthy TCP connection does not guarantee the service itself is healthy. Registries should expose richer health‑check APIs that let services define their own health criteria.
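The gap between "session alive" and "service healthy" can be shown in a few lines. This is an illustrative sketch, not a real registry API: the interface and method names are invented, and the point is only that routing decisions should combine connection liveness with a service-defined check.

```java
public class HealthCheckDemo {
    // Service-defined health criterion, richer than "TCP session alive".
    interface HealthCheck { boolean healthy(); }

    // Route traffic only when both the session and the app-level check pass.
    static boolean shouldServeTraffic(boolean sessionAlive, HealthCheck check) {
        return sessionAlive && check.healthy();
    }

    public static void main(String[] args) {
        // A stuck service can keep its session (and ephemeral node) alive...
        boolean sessionAlive = true;
        HealthCheck stuck = () -> false; // ...yet fail its own readiness check.
        System.out.println("serve traffic? " + shouldServeTraffic(sessionAlive, stuck));
    }
}
```

With ephemeral-node-only monitoring, the first argument is all the registry sees; exposing the second is what the richer health-check APIs provide.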


Tags: Distributed Systems, CAP theorem, Zookeeper, Eureka, consistency, Service Registry, availability
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.